00:00:00.001 Started by upstream project "autotest-per-patch" build number 132741
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.021 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:02.710 The recommended git tool is: git
00:00:02.711 using credential 00000000-0000-0000-0000-000000000002
00:00:02.712 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:02.724 Fetching changes from the remote Git repository
00:00:02.729 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:02.740 Using shallow fetch with depth 1
00:00:02.740 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:02.740 > git --version # timeout=10
00:00:02.751 > git --version # 'git version 2.39.2'
00:00:02.751 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:02.763 Setting http proxy: proxy-dmz.intel.com:911
00:00:02.763 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:08.238 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:08.251 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:08.264 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:08.264 > git config core.sparsecheckout # timeout=10
00:00:08.278 > git read-tree -mu HEAD # timeout=10
00:00:08.296 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:08.319 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:08.319 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:08.462 [Pipeline] Start of Pipeline
00:00:08.476 [Pipeline] library
00:00:08.477 Loading library shm_lib@master
00:00:08.477 Library shm_lib@master is cached. Copying from home.
00:00:08.491 [Pipeline] node
00:00:08.500 Running on VM-host-WFP1 in /var/jenkins/workspace/raid-vg-autotest
00:00:08.502 [Pipeline] {
00:00:08.508 [Pipeline] catchError
00:00:08.509 [Pipeline] {
00:00:08.518 [Pipeline] wrap
00:00:08.525 [Pipeline] {
00:00:08.530 [Pipeline] stage
00:00:08.532 [Pipeline] { (Prologue)
00:00:08.544 [Pipeline] echo
00:00:08.545 Node: VM-host-WFP1
00:00:08.550 [Pipeline] cleanWs
00:00:08.557 [WS-CLEANUP] Deleting project workspace...
00:00:08.557 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.562 [WS-CLEANUP] done
00:00:08.770 [Pipeline] setCustomBuildProperty
00:00:08.859 [Pipeline] httpRequest
00:00:09.700 [Pipeline] echo
00:00:09.701 Sorcerer 10.211.164.101 is alive
00:00:09.709 [Pipeline] retry
00:00:09.711 [Pipeline] {
00:00:09.721 [Pipeline] httpRequest
00:00:09.725 HttpMethod: GET
00:00:09.725 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.725 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.748 Response Code: HTTP/1.1 200 OK
00:00:09.749 Success: Status code 200 is in the accepted range: 200,404
00:00:09.749 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:39.284 [Pipeline] }
00:00:39.300 [Pipeline] // retry
00:00:39.306 [Pipeline] sh
00:00:39.587 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:39.602 [Pipeline] httpRequest
00:00:40.044 [Pipeline] echo
00:00:40.046 Sorcerer 10.211.164.101 is alive
00:00:40.055 [Pipeline] retry
00:00:40.057 [Pipeline] {
00:00:40.072 [Pipeline] httpRequest
00:00:40.076 HttpMethod: GET
00:00:40.077 URL: http://10.211.164.101/packages/spdk_a718549f7f2ba154e5c93e2f12405251b207d7e2.tar.gz
00:00:40.077 Sending request to url: http://10.211.164.101/packages/spdk_a718549f7f2ba154e5c93e2f12405251b207d7e2.tar.gz
00:00:40.088 Response Code: HTTP/1.1 200 OK
00:00:40.089 Success: Status code 200 is in the accepted range: 200,404
00:00:40.089 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_a718549f7f2ba154e5c93e2f12405251b207d7e2.tar.gz
00:03:06.603 [Pipeline] }
00:03:06.622 [Pipeline] // retry
00:03:06.630 [Pipeline] sh
00:03:06.914 + tar --no-same-owner -xf spdk_a718549f7f2ba154e5c93e2f12405251b207d7e2.tar.gz
00:03:09.459 [Pipeline] sh
00:03:09.739 + git -C spdk log --oneline -n5
00:03:09.739 a718549f7 nvme/rdma: Don't limit max_sge if UMR is used
00:03:09.740 82349efc6 nvme/rdma: Register UMR per IO request
00:03:09.740 52436cfa9 accel/mlx5: Support mkey registration
00:03:09.740 55a400896 accel/mlx5: Create pool of UMRs
00:03:09.740 562857cff lib/mlx5: API to configure UMR
00:03:09.758 [Pipeline] writeFile
00:03:09.773 [Pipeline] sh
00:03:10.057 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:03:10.068 [Pipeline] sh
00:03:10.349 + cat autorun-spdk.conf
00:03:10.349 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:10.349 SPDK_RUN_ASAN=1
00:03:10.349 SPDK_RUN_UBSAN=1
00:03:10.349 SPDK_TEST_RAID=1
00:03:10.349 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:10.356 RUN_NIGHTLY=0
00:03:10.358 [Pipeline] }
00:03:10.370 [Pipeline] // stage
00:03:10.384 [Pipeline] stage
00:03:10.386 [Pipeline] { (Run VM)
00:03:10.398 [Pipeline] sh
00:03:10.747 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:03:10.747 + echo 'Start stage prepare_nvme.sh'
00:03:10.747 Start stage prepare_nvme.sh
00:03:10.747 + [[ -n 2 ]]
00:03:10.747 + disk_prefix=ex2
00:03:10.747 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:03:10.747 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:03:10.747 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:03:10.747 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:10.747 ++ SPDK_RUN_ASAN=1
00:03:10.747 ++ SPDK_RUN_UBSAN=1
00:03:10.747 ++ SPDK_TEST_RAID=1
00:03:10.747 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:10.747 ++ RUN_NIGHTLY=0
00:03:10.747 + cd /var/jenkins/workspace/raid-vg-autotest
00:03:10.747 + nvme_files=()
00:03:10.747 + declare -A nvme_files
00:03:10.747 + backend_dir=/var/lib/libvirt/images/backends
00:03:10.747 + nvme_files['nvme.img']=5G
00:03:10.747 + nvme_files['nvme-cmb.img']=5G
00:03:10.747 + nvme_files['nvme-multi0.img']=4G
00:03:10.747 + nvme_files['nvme-multi1.img']=4G
00:03:10.747 + nvme_files['nvme-multi2.img']=4G
00:03:10.747 + nvme_files['nvme-openstack.img']=8G
00:03:10.747 + nvme_files['nvme-zns.img']=5G
00:03:10.747 + (( SPDK_TEST_NVME_PMR == 1 ))
00:03:10.747 + (( SPDK_TEST_FTL == 1 ))
00:03:10.747 + (( SPDK_TEST_NVME_FDP == 1 ))
00:03:10.747 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:03:10.747 + for nvme in "${!nvme_files[@]}"
00:03:10.747 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G
00:03:10.747 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:03:10.747 + for nvme in "${!nvme_files[@]}"
00:03:10.747 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G
00:03:10.747 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:03:10.747 + for nvme in "${!nvme_files[@]}"
00:03:10.747 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G
00:03:10.747 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:03:10.747 + for nvme in "${!nvme_files[@]}"
00:03:10.747 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G
00:03:10.747 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:03:10.747 + for nvme in "${!nvme_files[@]}"
00:03:10.747 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G
00:03:10.747 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:03:10.747 + for nvme in "${!nvme_files[@]}"
00:03:10.747 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G
00:03:10.747 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:03:10.747 + for nvme in "${!nvme_files[@]}"
00:03:10.747 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G
00:03:11.006 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:03:11.006 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu
00:03:11.006 + echo 'End stage prepare_nvme.sh'
00:03:11.006 End stage prepare_nvme.sh
00:03:11.017 [Pipeline] sh
00:03:11.298 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:03:11.298 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora39
00:03:11.298
00:03:11.298 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:03:11.298 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:03:11.298 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:03:11.298 HELP=0
00:03:11.298 DRY_RUN=0
00:03:11.298 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,
00:03:11.298 NVME_DISKS_TYPE=nvme,nvme,
00:03:11.298 NVME_AUTO_CREATE=0
00:03:11.298 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,
00:03:11.298 NVME_CMB=,,
00:03:11.298 NVME_PMR=,,
00:03:11.298 NVME_ZNS=,,
00:03:11.298 NVME_MS=,,
00:03:11.298 NVME_FDP=,,
00:03:11.298 SPDK_VAGRANT_DISTRO=fedora39
00:03:11.298 SPDK_VAGRANT_VMCPU=10
00:03:11.298 SPDK_VAGRANT_VMRAM=12288
00:03:11.298 SPDK_VAGRANT_PROVIDER=libvirt
00:03:11.298 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:03:11.298 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:03:11.298 SPDK_OPENSTACK_NETWORK=0
00:03:11.298 VAGRANT_PACKAGE_BOX=0
00:03:11.298 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:03:11.298 FORCE_DISTRO=true
00:03:11.298 VAGRANT_BOX_VERSION=
00:03:11.298 EXTRA_VAGRANTFILES=
00:03:11.298 NIC_MODEL=e1000
00:03:11.298
00:03:11.298 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:03:11.298 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:03:13.828 Bringing machine 'default' up with 'libvirt' provider...
00:03:15.222 ==> default: Creating image (snapshot of base box volume).
00:03:15.222 ==> default: Creating domain with the following settings...
00:03:15.222 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733498997_788c67d2f6b2ff73773d
00:03:15.222 ==> default: -- Domain type: kvm
00:03:15.222 ==> default: -- Cpus: 10
00:03:15.222 ==> default: -- Feature: acpi
00:03:15.222 ==> default: -- Feature: apic
00:03:15.222 ==> default: -- Feature: pae
00:03:15.222 ==> default: -- Memory: 12288M
00:03:15.222 ==> default: -- Memory Backing: hugepages:
00:03:15.222 ==> default: -- Management MAC:
00:03:15.222 ==> default: -- Loader:
00:03:15.222 ==> default: -- Nvram:
00:03:15.222 ==> default: -- Base box: spdk/fedora39
00:03:15.222 ==> default: -- Storage pool: default
00:03:15.222 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733498997_788c67d2f6b2ff73773d.img (20G)
00:03:15.222 ==> default: -- Volume Cache: default
00:03:15.222 ==> default: -- Kernel:
00:03:15.222 ==> default: -- Initrd:
00:03:15.222 ==> default: -- Graphics Type: vnc
00:03:15.222 ==> default: -- Graphics Port: -1
00:03:15.222 ==> default: -- Graphics IP: 127.0.0.1
00:03:15.222 ==> default: -- Graphics Password: Not defined
00:03:15.222 ==> default: -- Video Type: cirrus
00:03:15.222 ==> default: -- Video VRAM: 9216
00:03:15.222 ==> default: -- Sound Type:
00:03:15.222 ==> default: -- Keymap: en-us
00:03:15.222 ==> default: -- TPM Path:
00:03:15.222 ==> default: -- INPUT: type=mouse, bus=ps2
00:03:15.222 ==> default: -- Command line args:
00:03:15.222 ==> default: -> value=-device,
00:03:15.222 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:03:15.222 ==> default: -> value=-drive,
00:03:15.222 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0,
00:03:15.222 ==> default: -> value=-device,
00:03:15.222 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:15.222 ==> default: -> value=-device,
00:03:15.222 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:03:15.222 ==> default: -> value=-drive,
00:03:15.222 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:03:15.222 ==> default: -> value=-device,
00:03:15.222 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:15.222 ==> default: -> value=-drive,
00:03:15.222 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:03:15.222 ==> default: -> value=-device,
00:03:15.222 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:15.222 ==> default: -> value=-drive,
00:03:15.222 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:03:15.222 ==> default: -> value=-device,
00:03:15.222 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:15.480 ==> default: Creating shared folders metadata...
00:03:15.480 ==> default: Starting domain.
00:03:18.022 ==> default: Waiting for domain to get an IP address...
00:03:36.124 ==> default: Waiting for SSH to become available...
00:03:36.124 ==> default: Configuring and enabling network interfaces...
00:03:40.317 default: SSH address: 192.168.121.12:22
00:03:40.317 default: SSH username: vagrant
00:03:40.317 default: SSH auth method: private key
00:03:43.602 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:03:53.577 ==> default: Mounting SSHFS shared folder...
00:03:55.479 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:03:55.479 ==> default: Checking Mount..
00:03:57.381 ==> default: Folder Successfully Mounted!
00:03:57.381 ==> default: Running provisioner: file...
00:03:58.318 default: ~/.gitconfig => .gitconfig
00:03:58.576
00:03:58.576 SUCCESS!
00:03:58.576
00:03:58.576 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:03:58.576 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:03:58.576 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:03:58.576
00:03:58.585 [Pipeline] }
00:03:58.601 [Pipeline] // stage
00:03:58.611 [Pipeline] dir
00:03:58.611 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:03:58.613 [Pipeline] {
00:03:58.627 [Pipeline] catchError
00:03:58.629 [Pipeline] {
00:03:58.642 [Pipeline] sh
00:03:58.924 + vagrant ssh-config --host vagrant
00:03:58.924 + sed -ne /^Host/,$p
00:03:58.924 + tee ssh_conf
00:04:01.564 Host vagrant
00:04:01.564 HostName 192.168.121.12
00:04:01.564 User vagrant
00:04:01.564 Port 22
00:04:01.564 UserKnownHostsFile /dev/null
00:04:01.564 StrictHostKeyChecking no
00:04:01.564 PasswordAuthentication no
00:04:01.564 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:04:01.564 IdentitiesOnly yes
00:04:01.564 LogLevel FATAL
00:04:01.564 ForwardAgent yes
00:04:01.564 ForwardX11 yes
00:04:01.564
00:04:01.576 [Pipeline] withEnv
00:04:01.578 [Pipeline] {
00:04:01.588 [Pipeline] sh
00:04:01.866 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:04:01.866 source /etc/os-release
00:04:01.866 [[ -e /image.version ]] && img=$(< /image.version)
00:04:01.866 # Minimal, systemd-like check.
00:04:01.866 if [[ -e /.dockerenv ]]; then
00:04:01.866 # Clear garbage from the node's name:
00:04:01.866 # agt-er_autotest_547-896 -> autotest_547-896
00:04:01.866 # $HOSTNAME is the actual container id
00:04:01.866 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:04:01.866 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:04:01.866 # We can assume this is a mount from a host where container is running,
00:04:01.866 # so fetch its hostname to easily identify the target swarm worker.
00:04:01.866 container="$(< /etc/hostname) ($agent)"
00:04:01.866 else
00:04:01.866 # Fallback
00:04:01.866 container=$agent
00:04:01.866 fi
00:04:01.866 fi
00:04:01.866 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:04:01.866
00:04:02.132 [Pipeline] }
00:04:02.144 [Pipeline] // withEnv
00:04:02.151 [Pipeline] setCustomBuildProperty
00:04:02.163 [Pipeline] stage
00:04:02.165 [Pipeline] { (Tests)
00:04:02.181 [Pipeline] sh
00:04:02.461 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:04:02.730 [Pipeline] sh
00:04:03.010 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:04:03.281 [Pipeline] timeout
00:04:03.281 Timeout set to expire in 1 hr 30 min
00:04:03.283 [Pipeline] {
00:04:03.296 [Pipeline] sh
00:04:03.577 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:04:04.143 HEAD is now at a718549f7 nvme/rdma: Don't limit max_sge if UMR is used
00:04:04.156 [Pipeline] sh
00:04:04.512 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:04:04.783 [Pipeline] sh
00:04:05.063 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:04:05.336 [Pipeline] sh
00:04:05.615 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:04:05.874 ++ readlink -f spdk_repo
00:04:05.874 + DIR_ROOT=/home/vagrant/spdk_repo
00:04:05.874 + [[ -n /home/vagrant/spdk_repo ]]
00:04:05.874 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:04:05.874 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:04:05.874 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:04:05.874 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:04:05.874 + [[ -d /home/vagrant/spdk_repo/output ]]
00:04:05.874 + [[ raid-vg-autotest == pkgdep-* ]]
00:04:05.874 + cd /home/vagrant/spdk_repo
00:04:05.874 + source /etc/os-release
00:04:05.874 ++ NAME='Fedora Linux'
00:04:05.874 ++ VERSION='39 (Cloud Edition)'
00:04:05.874 ++ ID=fedora
00:04:05.874 ++ VERSION_ID=39
00:04:05.874 ++ VERSION_CODENAME=
00:04:05.874 ++ PLATFORM_ID=platform:f39
00:04:05.874 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:04:05.874 ++ ANSI_COLOR='0;38;2;60;110;180'
00:04:05.874 ++ LOGO=fedora-logo-icon
00:04:05.874 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:04:05.874 ++ HOME_URL=https://fedoraproject.org/
00:04:05.874 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:04:05.874 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:04:05.874 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:04:05.874 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:04:05.874 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:04:05.874 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:04:05.874 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:04:05.874 ++ SUPPORT_END=2024-11-12
00:04:05.874 ++ VARIANT='Cloud Edition'
00:04:05.874 ++ VARIANT_ID=cloud
00:04:05.874 + uname -a
00:04:05.874 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:04:05.874 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:04:06.441 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:06.441 Hugepages
00:04:06.441 node hugesize free / total
00:04:06.441 node0 1048576kB 0 / 0
00:04:06.441 node0 2048kB 0 / 0
00:04:06.441
00:04:06.441 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:06.441 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:04:06.442 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:04:06.442 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:04:06.442 + rm -f /tmp/spdk-ld-path
00:04:06.442 + source autorun-spdk.conf
00:04:06.442 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:06.442 ++ SPDK_RUN_ASAN=1
00:04:06.442 ++ SPDK_RUN_UBSAN=1
00:04:06.442 ++ SPDK_TEST_RAID=1
00:04:06.442 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:06.442 ++ RUN_NIGHTLY=0
00:04:06.442 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:04:06.442 + [[ -n '' ]]
00:04:06.442 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:04:06.711 + for M in /var/spdk/build-*-manifest.txt
00:04:06.711 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:04:06.711 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:04:06.711 + for M in /var/spdk/build-*-manifest.txt
00:04:06.711 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:04:06.711 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:04:06.711 + for M in /var/spdk/build-*-manifest.txt
00:04:06.711 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:04:06.711 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:04:06.711 ++ uname
00:04:06.711 + [[ Linux == \L\i\n\u\x ]]
00:04:06.711 + sudo dmesg -T
00:04:06.711 + sudo dmesg --clear
00:04:06.711 + dmesg_pid=5216
00:04:06.711 + [[ Fedora Linux == FreeBSD ]]
00:04:06.711 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:06.711 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:06.711 + sudo dmesg -Tw
00:04:06.711 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:04:06.711 + [[ -x /usr/src/fio-static/fio ]]
00:04:06.711 + export FIO_BIN=/usr/src/fio-static/fio
00:04:06.711 + FIO_BIN=/usr/src/fio-static/fio
00:04:06.711 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:04:06.711 + [[ ! -v VFIO_QEMU_BIN ]]
00:04:06.711 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:04:06.711 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:04:06.711 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:04:06.711 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:04:06.711 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:04:06.711 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:04:06.711 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:04:06.711 15:30:49 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:04:06.711 15:30:49 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:04:06.711 15:30:49 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:06.711 15:30:49 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:04:06.711 15:30:49 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:04:06.711 15:30:49 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:04:06.711 15:30:49 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:06.711 15:30:49 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:04:06.711 15:30:49 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:04:06.711 15:30:49 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:04:06.970 15:30:50 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:04:06.970 15:30:50 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:04:06.970 15:30:50 -- scripts/common.sh@15 -- $ shopt -s extglob
00:04:06.970 15:30:50 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:04:06.970 15:30:50 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:06.970 15:30:50 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:06.970 15:30:50 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:06.970 15:30:50 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:06.970 15:30:50 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:06.970 15:30:50 -- paths/export.sh@5 -- $ export PATH
00:04:06.970 15:30:50 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:06.970 15:30:50 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:04:06.970 15:30:50 -- common/autobuild_common.sh@493 -- $ date +%s
00:04:06.970 15:30:50 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733499050.XXXXXX
00:04:06.970 15:30:50 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733499050.lZNc7Y
00:04:06.970 15:30:50 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:04:06.970 15:30:50 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:04:06.970 15:30:50 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:04:06.970 15:30:50 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:04:06.970 15:30:50 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:04:06.970 15:30:50 -- common/autobuild_common.sh@509 -- $ get_config_params
00:04:06.970 15:30:50 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:04:06.970 15:30:50 -- common/autotest_common.sh@10 -- $ set +x
00:04:06.970 15:30:50 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:04:06.970 15:30:50 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:04:06.970 15:30:50 -- pm/common@17 -- $ local monitor
00:04:06.970 15:30:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:06.970 15:30:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:06.970 15:30:50 -- pm/common@25 -- $ sleep 1
00:04:06.970 15:30:50 -- pm/common@21 -- $ date +%s
00:04:06.970 15:30:50 -- pm/common@21 -- $ date +%s
00:04:06.970 15:30:50 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733499050
00:04:06.970 15:30:50 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733499050
00:04:06.970 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733499050_collect-vmstat.pm.log
00:04:06.970 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733499050_collect-cpu-load.pm.log
00:04:07.908 15:30:51 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:04:07.908 15:30:51 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:04:07.908 15:30:51 -- spdk/autobuild.sh@12 -- $ umask 022
00:04:07.908 15:30:51 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:04:07.908 15:30:51 -- spdk/autobuild.sh@16 -- $ date -u
00:04:07.908 Fri Dec 6 03:30:51 PM UTC 2024
00:04:07.908 15:30:51 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:04:07.908 v25.01-pre-308-ga718549f7
00:04:07.908 15:30:51 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:04:07.908 15:30:51 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:04:07.908 15:30:51 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:04:07.908 15:30:51 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:07.908 15:30:51 -- common/autotest_common.sh@10 -- $ set +x
00:04:07.908 ************************************
00:04:07.908 START TEST asan
00:04:07.908 ************************************
00:04:07.908 using asan
00:04:07.908 15:30:51 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:04:07.908
00:04:07.908 real 0m0.001s
00:04:07.908 user 0m0.001s
00:04:07.908 sys 0m0.000s
00:04:07.908 15:30:51 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:04:07.908 15:30:51 asan -- common/autotest_common.sh@10 -- $ set +x
00:04:07.908 ************************************
00:04:07.908 END TEST asan
00:04:07.908 ************************************
00:04:08.167 15:30:51 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:04:08.167 15:30:51 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:04:08.167 15:30:51 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:04:08.167 15:30:51 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:08.167 15:30:51 -- common/autotest_common.sh@10 -- $ set +x
00:04:08.167 ************************************
00:04:08.167 START TEST ubsan
00:04:08.167 ************************************
00:04:08.167 using ubsan
00:04:08.167 15:30:51 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:04:08.167
00:04:08.167 real 0m0.000s
00:04:08.167 user 0m0.000s
00:04:08.167 sys 0m0.000s
00:04:08.167 15:30:51 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:04:08.167 15:30:51 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:04:08.167 ************************************
00:04:08.167 END TEST ubsan
00:04:08.167 ************************************
00:04:08.167 15:30:51 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:04:08.167 15:30:51 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:04:08.167 15:30:51 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:04:08.167 15:30:51 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:04:08.167 15:30:51 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:04:08.167 15:30:51 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:04:08.167 15:30:51 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:04:08.167 15:30:51 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:04:08.167 15:30:51 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:04:08.427 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:04:08.427 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:04:08.994 Using 'verbs' RDMA provider
00:04:28.006 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:04:42.958 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:04:42.958 Creating mk/config.mk...done.
00:04:42.958 Creating mk/cc.flags.mk...done.
00:04:42.958 Type 'make' to build.
00:04:42.958 15:31:25 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:04:42.958 15:31:25 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:04:42.958 15:31:25 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:42.958 15:31:25 -- common/autotest_common.sh@10 -- $ set +x
00:04:42.958 ************************************
00:04:42.958 START TEST make
00:04:42.958 ************************************
00:04:42.958 15:31:25 make -- common/autotest_common.sh@1129 -- $ make -j10
00:04:42.958 make[1]: Nothing to be done for 'all'.
00:04:55.163 The Meson build system
00:04:55.163 Version: 1.5.0
00:04:55.163 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:04:55.163 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:04:55.163 Build type: native build
00:04:55.163 Program cat found: YES (/usr/bin/cat)
00:04:55.163 Project name: DPDK
00:04:55.163 Project version: 24.03.0
00:04:55.163 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:04:55.163 C linker for the host machine: cc ld.bfd 2.40-14
00:04:55.163 Host machine cpu family: x86_64
00:04:55.163 Host machine cpu: x86_64
00:04:55.163 Message: ## Building in Developer Mode ##
00:04:55.163 Program pkg-config found: YES (/usr/bin/pkg-config)
00:04:55.163 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:04:55.163 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:04:55.163 Program python3 found: YES (/usr/bin/python3)
00:04:55.163 Program cat found: YES (/usr/bin/cat)
00:04:55.163 Compiler for C supports arguments -march=native: YES
00:04:55.163 Checking for size of "void *" : 8
00:04:55.163 Checking for size of "void *" : 8 (cached)
00:04:55.163 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:04:55.163 Library m found: YES
00:04:55.163 Library numa found: YES
00:04:55.163 Has header "numaif.h" : YES
00:04:55.163 Library fdt found: NO
00:04:55.163 Library execinfo found: NO
00:04:55.163 Has header "execinfo.h" : YES
00:04:55.163 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:04:55.163 Run-time dependency libarchive found: NO (tried pkgconfig)
00:04:55.163 Run-time dependency libbsd found: NO (tried pkgconfig)
00:04:55.163 Run-time dependency jansson found: NO (tried pkgconfig)
00:04:55.163 Run-time dependency openssl found: YES 3.1.1
00:04:55.163 Run-time dependency libpcap found: YES 1.10.4
00:04:55.163 Has header "pcap.h" with dependency
libpcap: YES 00:04:55.163 Compiler for C supports arguments -Wcast-qual: YES 00:04:55.163 Compiler for C supports arguments -Wdeprecated: YES 00:04:55.163 Compiler for C supports arguments -Wformat: YES 00:04:55.163 Compiler for C supports arguments -Wformat-nonliteral: NO 00:04:55.163 Compiler for C supports arguments -Wformat-security: NO 00:04:55.163 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:55.163 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:55.163 Compiler for C supports arguments -Wnested-externs: YES 00:04:55.163 Compiler for C supports arguments -Wold-style-definition: YES 00:04:55.163 Compiler for C supports arguments -Wpointer-arith: YES 00:04:55.163 Compiler for C supports arguments -Wsign-compare: YES 00:04:55.163 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:55.163 Compiler for C supports arguments -Wundef: YES 00:04:55.163 Compiler for C supports arguments -Wwrite-strings: YES 00:04:55.163 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:04:55.163 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:04:55.163 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:55.163 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:04:55.163 Program objdump found: YES (/usr/bin/objdump) 00:04:55.163 Compiler for C supports arguments -mavx512f: YES 00:04:55.163 Checking if "AVX512 checking" compiles: YES 00:04:55.163 Fetching value of define "__SSE4_2__" : 1 00:04:55.163 Fetching value of define "__AES__" : 1 00:04:55.163 Fetching value of define "__AVX__" : 1 00:04:55.163 Fetching value of define "__AVX2__" : 1 00:04:55.163 Fetching value of define "__AVX512BW__" : 1 00:04:55.163 Fetching value of define "__AVX512CD__" : 1 00:04:55.163 Fetching value of define "__AVX512DQ__" : 1 00:04:55.163 Fetching value of define "__AVX512F__" : 1 00:04:55.163 Fetching value of define "__AVX512VL__" : 1 00:04:55.163 Fetching value of define 
"__PCLMUL__" : 1 00:04:55.163 Fetching value of define "__RDRND__" : 1 00:04:55.163 Fetching value of define "__RDSEED__" : 1 00:04:55.163 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:55.163 Fetching value of define "__znver1__" : (undefined) 00:04:55.163 Fetching value of define "__znver2__" : (undefined) 00:04:55.163 Fetching value of define "__znver3__" : (undefined) 00:04:55.163 Fetching value of define "__znver4__" : (undefined) 00:04:55.163 Library asan found: YES 00:04:55.163 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:55.163 Message: lib/log: Defining dependency "log" 00:04:55.163 Message: lib/kvargs: Defining dependency "kvargs" 00:04:55.163 Message: lib/telemetry: Defining dependency "telemetry" 00:04:55.163 Library rt found: YES 00:04:55.163 Checking for function "getentropy" : NO 00:04:55.163 Message: lib/eal: Defining dependency "eal" 00:04:55.163 Message: lib/ring: Defining dependency "ring" 00:04:55.163 Message: lib/rcu: Defining dependency "rcu" 00:04:55.163 Message: lib/mempool: Defining dependency "mempool" 00:04:55.163 Message: lib/mbuf: Defining dependency "mbuf" 00:04:55.163 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:55.163 Fetching value of define "__AVX512F__" : 1 (cached) 00:04:55.163 Fetching value of define "__AVX512BW__" : 1 (cached) 00:04:55.163 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:04:55.163 Fetching value of define "__AVX512VL__" : 1 (cached) 00:04:55.163 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:04:55.163 Compiler for C supports arguments -mpclmul: YES 00:04:55.163 Compiler for C supports arguments -maes: YES 00:04:55.164 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:55.164 Compiler for C supports arguments -mavx512bw: YES 00:04:55.164 Compiler for C supports arguments -mavx512dq: YES 00:04:55.164 Compiler for C supports arguments -mavx512vl: YES 00:04:55.164 Compiler for C supports arguments -mvpclmulqdq: YES 
00:04:55.164 Compiler for C supports arguments -mavx2: YES 00:04:55.164 Compiler for C supports arguments -mavx: YES 00:04:55.164 Message: lib/net: Defining dependency "net" 00:04:55.164 Message: lib/meter: Defining dependency "meter" 00:04:55.164 Message: lib/ethdev: Defining dependency "ethdev" 00:04:55.164 Message: lib/pci: Defining dependency "pci" 00:04:55.164 Message: lib/cmdline: Defining dependency "cmdline" 00:04:55.164 Message: lib/hash: Defining dependency "hash" 00:04:55.164 Message: lib/timer: Defining dependency "timer" 00:04:55.164 Message: lib/compressdev: Defining dependency "compressdev" 00:04:55.164 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:55.164 Message: lib/dmadev: Defining dependency "dmadev" 00:04:55.164 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:55.164 Message: lib/power: Defining dependency "power" 00:04:55.164 Message: lib/reorder: Defining dependency "reorder" 00:04:55.164 Message: lib/security: Defining dependency "security" 00:04:55.164 Has header "linux/userfaultfd.h" : YES 00:04:55.164 Has header "linux/vduse.h" : YES 00:04:55.164 Message: lib/vhost: Defining dependency "vhost" 00:04:55.164 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:04:55.164 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:55.164 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:55.164 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:55.164 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:55.164 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:55.164 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:55.164 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:55.164 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:55.164 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:04:55.164 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:04:55.164 Configuring doxy-api-html.conf using configuration 00:04:55.164 Configuring doxy-api-man.conf using configuration 00:04:55.164 Program mandb found: YES (/usr/bin/mandb) 00:04:55.164 Program sphinx-build found: NO 00:04:55.164 Configuring rte_build_config.h using configuration 00:04:55.164 Message: 00:04:55.164 ================= 00:04:55.164 Applications Enabled 00:04:55.164 ================= 00:04:55.164 00:04:55.164 apps: 00:04:55.164 00:04:55.164 00:04:55.164 Message: 00:04:55.164 ================= 00:04:55.164 Libraries Enabled 00:04:55.164 ================= 00:04:55.164 00:04:55.164 libs: 00:04:55.164 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:55.164 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:55.164 cryptodev, dmadev, power, reorder, security, vhost, 00:04:55.164 00:04:55.164 Message: 00:04:55.164 =============== 00:04:55.164 Drivers Enabled 00:04:55.164 =============== 00:04:55.164 00:04:55.164 common: 00:04:55.164 00:04:55.164 bus: 00:04:55.164 pci, vdev, 00:04:55.164 mempool: 00:04:55.164 ring, 00:04:55.164 dma: 00:04:55.164 00:04:55.164 net: 00:04:55.164 00:04:55.164 crypto: 00:04:55.164 00:04:55.164 compress: 00:04:55.164 00:04:55.164 vdpa: 00:04:55.164 00:04:55.164 00:04:55.164 Message: 00:04:55.164 ================= 00:04:55.164 Content Skipped 00:04:55.164 ================= 00:04:55.164 00:04:55.164 apps: 00:04:55.164 dumpcap: explicitly disabled via build config 00:04:55.164 graph: explicitly disabled via build config 00:04:55.164 pdump: explicitly disabled via build config 00:04:55.164 proc-info: explicitly disabled via build config 00:04:55.164 test-acl: explicitly disabled via build config 00:04:55.164 test-bbdev: explicitly disabled via build config 00:04:55.164 test-cmdline: explicitly disabled via build config 00:04:55.164 test-compress-perf: explicitly disabled via build config 00:04:55.164 test-crypto-perf: explicitly disabled via build 
config 00:04:55.164 test-dma-perf: explicitly disabled via build config 00:04:55.164 test-eventdev: explicitly disabled via build config 00:04:55.164 test-fib: explicitly disabled via build config 00:04:55.164 test-flow-perf: explicitly disabled via build config 00:04:55.164 test-gpudev: explicitly disabled via build config 00:04:55.164 test-mldev: explicitly disabled via build config 00:04:55.164 test-pipeline: explicitly disabled via build config 00:04:55.164 test-pmd: explicitly disabled via build config 00:04:55.164 test-regex: explicitly disabled via build config 00:04:55.164 test-sad: explicitly disabled via build config 00:04:55.164 test-security-perf: explicitly disabled via build config 00:04:55.164 00:04:55.164 libs: 00:04:55.164 argparse: explicitly disabled via build config 00:04:55.164 metrics: explicitly disabled via build config 00:04:55.164 acl: explicitly disabled via build config 00:04:55.164 bbdev: explicitly disabled via build config 00:04:55.164 bitratestats: explicitly disabled via build config 00:04:55.164 bpf: explicitly disabled via build config 00:04:55.164 cfgfile: explicitly disabled via build config 00:04:55.164 distributor: explicitly disabled via build config 00:04:55.164 efd: explicitly disabled via build config 00:04:55.164 eventdev: explicitly disabled via build config 00:04:55.164 dispatcher: explicitly disabled via build config 00:04:55.164 gpudev: explicitly disabled via build config 00:04:55.164 gro: explicitly disabled via build config 00:04:55.164 gso: explicitly disabled via build config 00:04:55.164 ip_frag: explicitly disabled via build config 00:04:55.164 jobstats: explicitly disabled via build config 00:04:55.164 latencystats: explicitly disabled via build config 00:04:55.164 lpm: explicitly disabled via build config 00:04:55.164 member: explicitly disabled via build config 00:04:55.164 pcapng: explicitly disabled via build config 00:04:55.164 rawdev: explicitly disabled via build config 00:04:55.164 regexdev: explicitly 
disabled via build config 00:04:55.164 mldev: explicitly disabled via build config 00:04:55.164 rib: explicitly disabled via build config 00:04:55.164 sched: explicitly disabled via build config 00:04:55.164 stack: explicitly disabled via build config 00:04:55.164 ipsec: explicitly disabled via build config 00:04:55.164 pdcp: explicitly disabled via build config 00:04:55.164 fib: explicitly disabled via build config 00:04:55.164 port: explicitly disabled via build config 00:04:55.164 pdump: explicitly disabled via build config 00:04:55.164 table: explicitly disabled via build config 00:04:55.164 pipeline: explicitly disabled via build config 00:04:55.164 graph: explicitly disabled via build config 00:04:55.164 node: explicitly disabled via build config 00:04:55.164 00:04:55.164 drivers: 00:04:55.164 common/cpt: not in enabled drivers build config 00:04:55.164 common/dpaax: not in enabled drivers build config 00:04:55.164 common/iavf: not in enabled drivers build config 00:04:55.164 common/idpf: not in enabled drivers build config 00:04:55.164 common/ionic: not in enabled drivers build config 00:04:55.164 common/mvep: not in enabled drivers build config 00:04:55.164 common/octeontx: not in enabled drivers build config 00:04:55.164 bus/auxiliary: not in enabled drivers build config 00:04:55.164 bus/cdx: not in enabled drivers build config 00:04:55.164 bus/dpaa: not in enabled drivers build config 00:04:55.164 bus/fslmc: not in enabled drivers build config 00:04:55.164 bus/ifpga: not in enabled drivers build config 00:04:55.164 bus/platform: not in enabled drivers build config 00:04:55.164 bus/uacce: not in enabled drivers build config 00:04:55.164 bus/vmbus: not in enabled drivers build config 00:04:55.164 common/cnxk: not in enabled drivers build config 00:04:55.164 common/mlx5: not in enabled drivers build config 00:04:55.164 common/nfp: not in enabled drivers build config 00:04:55.164 common/nitrox: not in enabled drivers build config 00:04:55.164 common/qat: not 
in enabled drivers build config 00:04:55.164 common/sfc_efx: not in enabled drivers build config 00:04:55.164 mempool/bucket: not in enabled drivers build config 00:04:55.164 mempool/cnxk: not in enabled drivers build config 00:04:55.164 mempool/dpaa: not in enabled drivers build config 00:04:55.164 mempool/dpaa2: not in enabled drivers build config 00:04:55.164 mempool/octeontx: not in enabled drivers build config 00:04:55.164 mempool/stack: not in enabled drivers build config 00:04:55.164 dma/cnxk: not in enabled drivers build config 00:04:55.164 dma/dpaa: not in enabled drivers build config 00:04:55.164 dma/dpaa2: not in enabled drivers build config 00:04:55.164 dma/hisilicon: not in enabled drivers build config 00:04:55.164 dma/idxd: not in enabled drivers build config 00:04:55.164 dma/ioat: not in enabled drivers build config 00:04:55.164 dma/skeleton: not in enabled drivers build config 00:04:55.164 net/af_packet: not in enabled drivers build config 00:04:55.164 net/af_xdp: not in enabled drivers build config 00:04:55.164 net/ark: not in enabled drivers build config 00:04:55.164 net/atlantic: not in enabled drivers build config 00:04:55.164 net/avp: not in enabled drivers build config 00:04:55.164 net/axgbe: not in enabled drivers build config 00:04:55.164 net/bnx2x: not in enabled drivers build config 00:04:55.164 net/bnxt: not in enabled drivers build config 00:04:55.164 net/bonding: not in enabled drivers build config 00:04:55.164 net/cnxk: not in enabled drivers build config 00:04:55.164 net/cpfl: not in enabled drivers build config 00:04:55.164 net/cxgbe: not in enabled drivers build config 00:04:55.164 net/dpaa: not in enabled drivers build config 00:04:55.164 net/dpaa2: not in enabled drivers build config 00:04:55.164 net/e1000: not in enabled drivers build config 00:04:55.164 net/ena: not in enabled drivers build config 00:04:55.164 net/enetc: not in enabled drivers build config 00:04:55.164 net/enetfec: not in enabled drivers build config 
00:04:55.164 net/enic: not in enabled drivers build config 00:04:55.164 net/failsafe: not in enabled drivers build config 00:04:55.164 net/fm10k: not in enabled drivers build config 00:04:55.164 net/gve: not in enabled drivers build config 00:04:55.164 net/hinic: not in enabled drivers build config 00:04:55.164 net/hns3: not in enabled drivers build config 00:04:55.164 net/i40e: not in enabled drivers build config 00:04:55.164 net/iavf: not in enabled drivers build config 00:04:55.164 net/ice: not in enabled drivers build config 00:04:55.164 net/idpf: not in enabled drivers build config 00:04:55.164 net/igc: not in enabled drivers build config 00:04:55.164 net/ionic: not in enabled drivers build config 00:04:55.164 net/ipn3ke: not in enabled drivers build config 00:04:55.164 net/ixgbe: not in enabled drivers build config 00:04:55.164 net/mana: not in enabled drivers build config 00:04:55.164 net/memif: not in enabled drivers build config 00:04:55.164 net/mlx4: not in enabled drivers build config 00:04:55.164 net/mlx5: not in enabled drivers build config 00:04:55.164 net/mvneta: not in enabled drivers build config 00:04:55.164 net/mvpp2: not in enabled drivers build config 00:04:55.164 net/netvsc: not in enabled drivers build config 00:04:55.164 net/nfb: not in enabled drivers build config 00:04:55.164 net/nfp: not in enabled drivers build config 00:04:55.165 net/ngbe: not in enabled drivers build config 00:04:55.165 net/null: not in enabled drivers build config 00:04:55.165 net/octeontx: not in enabled drivers build config 00:04:55.165 net/octeon_ep: not in enabled drivers build config 00:04:55.165 net/pcap: not in enabled drivers build config 00:04:55.165 net/pfe: not in enabled drivers build config 00:04:55.165 net/qede: not in enabled drivers build config 00:04:55.165 net/ring: not in enabled drivers build config 00:04:55.165 net/sfc: not in enabled drivers build config 00:04:55.165 net/softnic: not in enabled drivers build config 00:04:55.165 net/tap: not in 
enabled drivers build config 00:04:55.165 net/thunderx: not in enabled drivers build config 00:04:55.165 net/txgbe: not in enabled drivers build config 00:04:55.165 net/vdev_netvsc: not in enabled drivers build config 00:04:55.165 net/vhost: not in enabled drivers build config 00:04:55.165 net/virtio: not in enabled drivers build config 00:04:55.165 net/vmxnet3: not in enabled drivers build config 00:04:55.165 raw/*: missing internal dependency, "rawdev" 00:04:55.165 crypto/armv8: not in enabled drivers build config 00:04:55.165 crypto/bcmfs: not in enabled drivers build config 00:04:55.165 crypto/caam_jr: not in enabled drivers build config 00:04:55.165 crypto/ccp: not in enabled drivers build config 00:04:55.165 crypto/cnxk: not in enabled drivers build config 00:04:55.165 crypto/dpaa_sec: not in enabled drivers build config 00:04:55.165 crypto/dpaa2_sec: not in enabled drivers build config 00:04:55.165 crypto/ipsec_mb: not in enabled drivers build config 00:04:55.165 crypto/mlx5: not in enabled drivers build config 00:04:55.165 crypto/mvsam: not in enabled drivers build config 00:04:55.165 crypto/nitrox: not in enabled drivers build config 00:04:55.165 crypto/null: not in enabled drivers build config 00:04:55.165 crypto/octeontx: not in enabled drivers build config 00:04:55.165 crypto/openssl: not in enabled drivers build config 00:04:55.165 crypto/scheduler: not in enabled drivers build config 00:04:55.165 crypto/uadk: not in enabled drivers build config 00:04:55.165 crypto/virtio: not in enabled drivers build config 00:04:55.165 compress/isal: not in enabled drivers build config 00:04:55.165 compress/mlx5: not in enabled drivers build config 00:04:55.165 compress/nitrox: not in enabled drivers build config 00:04:55.165 compress/octeontx: not in enabled drivers build config 00:04:55.165 compress/zlib: not in enabled drivers build config 00:04:55.165 regex/*: missing internal dependency, "regexdev" 00:04:55.165 ml/*: missing internal dependency, "mldev" 
00:04:55.165 vdpa/ifc: not in enabled drivers build config 00:04:55.165 vdpa/mlx5: not in enabled drivers build config 00:04:55.165 vdpa/nfp: not in enabled drivers build config 00:04:55.165 vdpa/sfc: not in enabled drivers build config 00:04:55.165 event/*: missing internal dependency, "eventdev" 00:04:55.165 baseband/*: missing internal dependency, "bbdev" 00:04:55.165 gpu/*: missing internal dependency, "gpudev" 00:04:55.165 00:04:55.165 00:04:55.165 Build targets in project: 85 00:04:55.165 00:04:55.165 DPDK 24.03.0 00:04:55.165 00:04:55.165 User defined options 00:04:55.165 buildtype : debug 00:04:55.165 default_library : shared 00:04:55.165 libdir : lib 00:04:55.165 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:55.165 b_sanitize : address 00:04:55.165 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:04:55.165 c_link_args : 00:04:55.165 cpu_instruction_set: native 00:04:55.165 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:04:55.165 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:04:55.165 enable_docs : false 00:04:55.165 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:04:55.165 enable_kmods : false 00:04:55.165 max_lcores : 128 00:04:55.165 tests : false 00:04:55.165 00:04:55.165 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:55.165 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:04:55.165 [1/268] Compiling C object 
lib/librte_log.a.p/log_log_linux.c.o 00:04:55.165 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:55.165 [3/268] Linking static target lib/librte_log.a 00:04:55.165 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:55.165 [5/268] Linking static target lib/librte_kvargs.a 00:04:55.165 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:55.165 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:55.165 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:55.165 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:55.165 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:55.165 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:55.165 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:55.165 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:55.165 [14/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:55.165 [15/268] Linking static target lib/librte_telemetry.a 00:04:55.165 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:55.165 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:55.439 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:55.439 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:55.439 [20/268] Linking target lib/librte_log.so.24.1 00:04:55.439 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:55.439 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:55.695 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:55.695 [24/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:55.695 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:55.695 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:55.695 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:55.695 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:55.953 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:55.953 [30/268] Linking target lib/librte_kvargs.so.24.1 00:04:55.953 [31/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:55.953 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:56.209 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:56.209 [34/268] Linking target lib/librte_telemetry.so.24.1 00:04:56.209 [35/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:56.209 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:56.468 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:56.468 [38/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:56.468 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:56.468 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:56.468 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:56.468 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:56.468 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:56.468 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:56.468 [45/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:56.726 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:56.726 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:56.986 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:56.986 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:56.986 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:57.245 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:57.245 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:57.245 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:57.245 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:57.245 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:57.504 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:57.504 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:57.504 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:57.504 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:57.504 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:57.763 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:57.763 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:58.021 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:58.021 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:58.021 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:58.021 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:58.021 [67/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:58.021 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:58.279 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:58.279 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:58.279 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:58.537 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:58.537 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:58.537 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:58.537 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:58.537 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:58.537 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:58.537 [78/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:58.537 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:58.795 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:58.795 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:58.795 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:58.795 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:58.795 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:59.068 [85/268] Linking static target lib/librte_ring.a 00:04:59.068 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:59.068 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:59.068 [88/268] Linking static target lib/librte_eal.a 00:04:59.068 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:59.068 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 
00:04:59.340 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:59.340 [92/268] Linking static target lib/librte_mempool.a 00:04:59.340 [93/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:59.340 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:59.599 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:59.599 [96/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:59.599 [97/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:59.599 [98/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:59.857 [99/268] Linking static target lib/librte_rcu.a 00:04:59.857 [100/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:59.857 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:59.857 [102/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:59.857 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:05:00.117 [104/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:05:00.117 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:00.117 [106/268] Linking static target lib/librte_net.a 00:05:00.376 [107/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:05:00.376 [108/268] Linking static target lib/librte_meter.a 00:05:00.376 [109/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:00.376 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:00.376 [111/268] Linking static target lib/librte_mbuf.a 00:05:00.376 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:00.376 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:00.376 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:00.635 [115/268] Generating 
lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:00.635 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:00.635 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:00.894 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:01.153 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:01.411 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:01.411 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:01.411 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:05:01.411 [123/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:01.978 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:01.978 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:01.978 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:01.978 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:01.978 [128/268] Linking static target lib/librte_pci.a 00:05:01.978 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:02.237 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:02.237 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:02.237 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:02.237 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:02.237 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:05:02.237 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:02.237 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:02.237 [137/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:02.237 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:02.237 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:02.237 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:02.237 [141/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:02.237 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:02.495 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:05:02.495 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:02.495 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:02.752 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:02.753 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:02.753 [148/268] Linking static target lib/librte_cmdline.a 00:05:02.753 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:03.010 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:03.010 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:03.010 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:05:03.010 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:03.010 [154/268] Linking static target lib/librte_ethdev.a 00:05:03.281 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:03.281 [156/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:03.281 [157/268] Linking static target lib/librte_timer.a 00:05:03.540 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:03.540 [159/268] Linking static target 
lib/librte_compressdev.a 00:05:03.540 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:03.540 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:03.540 [162/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:03.540 [163/268] Linking static target lib/librte_hash.a 00:05:03.799 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:03.799 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:04.057 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:05:04.057 [167/268] Linking static target lib/librte_dmadev.a 00:05:04.057 [168/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:04.057 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:04.316 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:04.316 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:04.316 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:04.575 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:04.575 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:04.575 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:04.834 [176/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:04.835 [177/268] Linking static target lib/librte_cryptodev.a 00:05:04.835 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:04.835 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:04.835 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:04.835 [181/268] Compiling C object 
lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:04.835 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:05:05.095 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:05.095 [184/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:05.095 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:05.095 [186/268] Linking static target lib/librte_power.a 00:05:05.393 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:05.720 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:05.720 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:05.720 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:05.720 [191/268] Linking static target lib/librte_security.a 00:05:05.721 [192/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:05.721 [193/268] Linking static target lib/librte_reorder.a 00:05:05.979 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:06.544 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:06.544 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:06.544 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:06.544 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:06.802 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:06.802 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:07.061 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:07.061 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:07.061 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 
00:05:07.061 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:07.319 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:07.319 [206/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:07.319 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:07.319 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:07.577 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:07.577 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:07.577 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:07.834 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:07.834 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:07.834 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:07.834 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:07.834 [216/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:07.834 [217/268] Linking static target drivers/librte_bus_pci.a 00:05:07.834 [218/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:07.834 [219/268] Linking static target drivers/librte_bus_vdev.a 00:05:07.834 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:07.834 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:08.092 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:08.092 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:08.092 [224/268] Compiling C object 
drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:08.092 [225/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:08.092 [226/268] Linking static target drivers/librte_mempool_ring.a 00:05:08.350 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:08.916 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:12.216 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:12.216 [230/268] Linking target lib/librte_eal.so.24.1 00:05:12.503 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:12.503 [232/268] Linking target drivers/librte_bus_vdev.so.24.1 00:05:12.503 [233/268] Linking target lib/librte_pci.so.24.1 00:05:12.503 [234/268] Linking target lib/librte_ring.so.24.1 00:05:12.503 [235/268] Linking target lib/librte_meter.so.24.1 00:05:12.503 [236/268] Linking target lib/librte_timer.so.24.1 00:05:12.503 [237/268] Linking target lib/librte_dmadev.so.24.1 00:05:12.503 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:12.503 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:12.503 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:12.503 [241/268] Linking target drivers/librte_bus_pci.so.24.1 00:05:12.503 [242/268] Linking target lib/librte_rcu.so.24.1 00:05:12.503 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:05:12.761 [244/268] Linking target lib/librte_mempool.so.24.1 00:05:12.761 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:05:12.761 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:12.761 [247/268] Generating symbol file 
lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:12.761 [248/268] Linking target lib/librte_mbuf.so.24.1 00:05:12.761 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:05:13.021 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:13.021 [251/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:13.021 [252/268] Linking target lib/librte_compressdev.so.24.1 00:05:13.021 [253/268] Linking target lib/librte_net.so.24.1 00:05:13.021 [254/268] Linking target lib/librte_reorder.so.24.1 00:05:13.021 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:05:13.280 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:13.280 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:13.280 [258/268] Linking target lib/librte_cmdline.so.24.1 00:05:13.280 [259/268] Linking target lib/librte_security.so.24.1 00:05:13.280 [260/268] Linking target lib/librte_hash.so.24.1 00:05:13.280 [261/268] Linking target lib/librte_ethdev.so.24.1 00:05:13.280 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:05:13.539 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:05:13.539 [264/268] Linking target lib/librte_power.so.24.1 00:05:13.539 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:13.798 [266/268] Linking static target lib/librte_vhost.a 00:05:16.331 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:16.331 [268/268] Linking target lib/librte_vhost.so.24.1 00:05:16.331 INFO: autodetecting backend as ninja 00:05:16.331 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:05:34.415 CC lib/ut_mock/mock.o 00:05:34.415 CC lib/log/log.o 00:05:34.415 CC 
lib/log/log_flags.o 00:05:34.415 CC lib/log/log_deprecated.o 00:05:34.415 CC lib/ut/ut.o 00:05:34.415 LIB libspdk_ut_mock.a 00:05:34.415 LIB libspdk_log.a 00:05:34.415 LIB libspdk_ut.a 00:05:34.415 SO libspdk_ut_mock.so.6.0 00:05:34.415 SO libspdk_log.so.7.1 00:05:34.415 SO libspdk_ut.so.2.0 00:05:34.415 SYMLINK libspdk_ut_mock.so 00:05:34.415 SYMLINK libspdk_log.so 00:05:34.415 SYMLINK libspdk_ut.so 00:05:34.415 CC lib/dma/dma.o 00:05:34.415 CC lib/util/crc16.o 00:05:34.415 CC lib/util/bit_array.o 00:05:34.416 CC lib/util/crc32.o 00:05:34.416 CC lib/util/crc32c.o 00:05:34.416 CC lib/util/base64.o 00:05:34.416 CC lib/ioat/ioat.o 00:05:34.416 CC lib/util/cpuset.o 00:05:34.416 CXX lib/trace_parser/trace.o 00:05:34.416 CC lib/vfio_user/host/vfio_user_pci.o 00:05:34.416 CC lib/util/crc32_ieee.o 00:05:34.416 CC lib/vfio_user/host/vfio_user.o 00:05:34.416 CC lib/util/crc64.o 00:05:34.416 LIB libspdk_dma.a 00:05:34.416 CC lib/util/dif.o 00:05:34.416 CC lib/util/fd.o 00:05:34.416 SO libspdk_dma.so.5.0 00:05:34.416 CC lib/util/fd_group.o 00:05:34.416 CC lib/util/file.o 00:05:34.416 SYMLINK libspdk_dma.so 00:05:34.416 CC lib/util/hexlify.o 00:05:34.416 LIB libspdk_ioat.a 00:05:34.416 CC lib/util/iov.o 00:05:34.416 SO libspdk_ioat.so.7.0 00:05:34.416 CC lib/util/math.o 00:05:34.416 SYMLINK libspdk_ioat.so 00:05:34.416 CC lib/util/net.o 00:05:34.416 CC lib/util/pipe.o 00:05:34.416 LIB libspdk_vfio_user.a 00:05:34.416 CC lib/util/strerror_tls.o 00:05:34.416 SO libspdk_vfio_user.so.5.0 00:05:34.416 CC lib/util/string.o 00:05:34.416 CC lib/util/uuid.o 00:05:34.416 SYMLINK libspdk_vfio_user.so 00:05:34.416 CC lib/util/xor.o 00:05:34.416 CC lib/util/zipf.o 00:05:34.416 CC lib/util/md5.o 00:05:34.416 LIB libspdk_util.a 00:05:34.673 SO libspdk_util.so.10.1 00:05:34.931 LIB libspdk_trace_parser.a 00:05:34.931 SO libspdk_trace_parser.so.6.0 00:05:34.931 SYMLINK libspdk_util.so 00:05:34.931 SYMLINK libspdk_trace_parser.so 00:05:35.190 CC lib/rdma_utils/rdma_utils.o 00:05:35.190 CC 
lib/vmd/vmd.o 00:05:35.190 CC lib/vmd/led.o 00:05:35.190 CC lib/conf/conf.o 00:05:35.190 CC lib/json/json_write.o 00:05:35.190 CC lib/json/json_util.o 00:05:35.190 CC lib/json/json_parse.o 00:05:35.190 CC lib/env_dpdk/env.o 00:05:35.190 CC lib/env_dpdk/memory.o 00:05:35.190 CC lib/idxd/idxd.o 00:05:35.190 CC lib/idxd/idxd_user.o 00:05:35.448 LIB libspdk_conf.a 00:05:35.448 LIB libspdk_rdma_utils.a 00:05:35.448 CC lib/env_dpdk/pci.o 00:05:35.448 CC lib/env_dpdk/init.o 00:05:35.448 SO libspdk_conf.so.6.0 00:05:35.448 SO libspdk_rdma_utils.so.1.0 00:05:35.448 SYMLINK libspdk_conf.so 00:05:35.448 CC lib/env_dpdk/threads.o 00:05:35.448 SYMLINK libspdk_rdma_utils.so 00:05:35.448 CC lib/env_dpdk/pci_ioat.o 00:05:35.448 LIB libspdk_json.a 00:05:35.706 SO libspdk_json.so.6.0 00:05:35.706 CC lib/idxd/idxd_kernel.o 00:05:35.706 CC lib/env_dpdk/pci_virtio.o 00:05:35.706 SYMLINK libspdk_json.so 00:05:35.706 CC lib/env_dpdk/pci_vmd.o 00:05:35.706 CC lib/env_dpdk/pci_idxd.o 00:05:35.706 CC lib/env_dpdk/pci_event.o 00:05:35.964 CC lib/env_dpdk/sigbus_handler.o 00:05:35.964 CC lib/env_dpdk/pci_dpdk.o 00:05:35.964 CC lib/rdma_provider/common.o 00:05:35.964 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:35.964 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:35.964 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:35.964 LIB libspdk_idxd.a 00:05:35.964 LIB libspdk_vmd.a 00:05:36.223 SO libspdk_idxd.so.12.1 00:05:36.223 SO libspdk_vmd.so.6.0 00:05:36.223 SYMLINK libspdk_vmd.so 00:05:36.223 SYMLINK libspdk_idxd.so 00:05:36.223 LIB libspdk_rdma_provider.a 00:05:36.223 CC lib/jsonrpc/jsonrpc_server.o 00:05:36.223 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:36.223 CC lib/jsonrpc/jsonrpc_client.o 00:05:36.223 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:36.223 SO libspdk_rdma_provider.so.7.0 00:05:36.482 SYMLINK libspdk_rdma_provider.so 00:05:36.482 LIB libspdk_jsonrpc.a 00:05:36.741 SO libspdk_jsonrpc.so.6.0 00:05:36.741 SYMLINK libspdk_jsonrpc.so 00:05:37.309 CC lib/rpc/rpc.o 00:05:37.310 LIB libspdk_env_dpdk.a 
00:05:37.310 SO libspdk_env_dpdk.so.15.1 00:05:37.310 LIB libspdk_rpc.a 00:05:37.310 SO libspdk_rpc.so.6.0 00:05:37.569 SYMLINK libspdk_rpc.so 00:05:37.569 SYMLINK libspdk_env_dpdk.so 00:05:37.828 CC lib/keyring/keyring.o 00:05:37.828 CC lib/keyring/keyring_rpc.o 00:05:37.828 CC lib/trace/trace.o 00:05:37.828 CC lib/trace/trace_flags.o 00:05:37.828 CC lib/trace/trace_rpc.o 00:05:37.828 CC lib/notify/notify.o 00:05:37.828 CC lib/notify/notify_rpc.o 00:05:38.087 LIB libspdk_notify.a 00:05:38.087 LIB libspdk_keyring.a 00:05:38.087 SO libspdk_notify.so.6.0 00:05:38.087 SO libspdk_keyring.so.2.0 00:05:38.087 LIB libspdk_trace.a 00:05:38.087 SYMLINK libspdk_notify.so 00:05:38.087 SYMLINK libspdk_keyring.so 00:05:38.087 SO libspdk_trace.so.11.0 00:05:38.345 SYMLINK libspdk_trace.so 00:05:38.604 CC lib/thread/thread.o 00:05:38.604 CC lib/thread/iobuf.o 00:05:38.604 CC lib/sock/sock.o 00:05:38.604 CC lib/sock/sock_rpc.o 00:05:39.173 LIB libspdk_sock.a 00:05:39.173 SO libspdk_sock.so.10.0 00:05:39.431 SYMLINK libspdk_sock.so 00:05:39.691 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:39.691 CC lib/nvme/nvme_ns_cmd.o 00:05:39.691 CC lib/nvme/nvme_ctrlr.o 00:05:39.691 CC lib/nvme/nvme_fabric.o 00:05:39.691 CC lib/nvme/nvme_ns.o 00:05:39.691 CC lib/nvme/nvme_pcie_common.o 00:05:39.691 CC lib/nvme/nvme.o 00:05:39.691 CC lib/nvme/nvme_qpair.o 00:05:39.691 CC lib/nvme/nvme_pcie.o 00:05:40.626 LIB libspdk_thread.a 00:05:40.626 SO libspdk_thread.so.11.0 00:05:40.626 CC lib/nvme/nvme_quirks.o 00:05:40.626 CC lib/nvme/nvme_transport.o 00:05:40.626 CC lib/nvme/nvme_discovery.o 00:05:40.626 SYMLINK libspdk_thread.so 00:05:40.626 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:40.884 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:40.884 CC lib/accel/accel.o 00:05:40.884 CC lib/blob/blobstore.o 00:05:40.884 CC lib/init/json_config.o 00:05:40.884 CC lib/nvme/nvme_tcp.o 00:05:41.143 CC lib/init/subsystem.o 00:05:41.143 CC lib/accel/accel_rpc.o 00:05:41.402 CC lib/virtio/virtio.o 00:05:41.402 CC lib/fsdev/fsdev.o 
00:05:41.402 CC lib/fsdev/fsdev_io.o 00:05:41.402 CC lib/init/subsystem_rpc.o 00:05:41.402 CC lib/init/rpc.o 00:05:41.402 CC lib/fsdev/fsdev_rpc.o 00:05:41.660 CC lib/blob/request.o 00:05:41.660 LIB libspdk_init.a 00:05:41.660 CC lib/blob/zeroes.o 00:05:41.660 SO libspdk_init.so.6.0 00:05:41.660 CC lib/virtio/virtio_vhost_user.o 00:05:41.660 SYMLINK libspdk_init.so 00:05:41.660 CC lib/blob/blob_bs_dev.o 00:05:41.926 CC lib/virtio/virtio_vfio_user.o 00:05:41.926 CC lib/nvme/nvme_opal.o 00:05:41.926 CC lib/event/app.o 00:05:41.926 LIB libspdk_fsdev.a 00:05:41.926 CC lib/event/reactor.o 00:05:41.926 CC lib/virtio/virtio_pci.o 00:05:42.192 SO libspdk_fsdev.so.2.0 00:05:42.192 CC lib/accel/accel_sw.o 00:05:42.192 CC lib/event/log_rpc.o 00:05:42.192 SYMLINK libspdk_fsdev.so 00:05:42.192 CC lib/nvme/nvme_io_msg.o 00:05:42.192 CC lib/event/app_rpc.o 00:05:42.451 LIB libspdk_virtio.a 00:05:42.451 SO libspdk_virtio.so.7.0 00:05:42.451 LIB libspdk_accel.a 00:05:42.451 CC lib/event/scheduler_static.o 00:05:42.451 SO libspdk_accel.so.16.0 00:05:42.451 SYMLINK libspdk_virtio.so 00:05:42.451 CC lib/nvme/nvme_poll_group.o 00:05:42.451 CC lib/nvme/nvme_zns.o 00:05:42.451 CC lib/nvme/nvme_stubs.o 00:05:42.709 CC lib/nvme/nvme_auth.o 00:05:42.709 SYMLINK libspdk_accel.so 00:05:42.709 CC lib/nvme/nvme_cuse.o 00:05:42.709 LIB libspdk_event.a 00:05:42.709 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:42.709 SO libspdk_event.so.14.0 00:05:42.709 SYMLINK libspdk_event.so 00:05:42.709 CC lib/nvme/nvme_rdma.o 00:05:42.967 CC lib/bdev/bdev.o 00:05:42.967 CC lib/bdev/bdev_rpc.o 00:05:42.967 CC lib/bdev/bdev_zone.o 00:05:43.225 CC lib/bdev/part.o 00:05:43.225 CC lib/bdev/scsi_nvme.o 00:05:43.483 LIB libspdk_fuse_dispatcher.a 00:05:43.483 SO libspdk_fuse_dispatcher.so.1.0 00:05:43.483 SYMLINK libspdk_fuse_dispatcher.so 00:05:44.417 LIB libspdk_nvme.a 00:05:44.675 LIB libspdk_blob.a 00:05:44.675 SO libspdk_nvme.so.15.0 00:05:44.675 SO libspdk_blob.so.12.0 00:05:44.675 SYMLINK libspdk_blob.so 
00:05:44.934 SYMLINK libspdk_nvme.so 00:05:45.192 CC lib/blobfs/blobfs.o 00:05:45.192 CC lib/blobfs/tree.o 00:05:45.192 CC lib/lvol/lvol.o 00:05:46.127 LIB libspdk_blobfs.a 00:05:46.127 SO libspdk_blobfs.so.11.0 00:05:46.127 LIB libspdk_bdev.a 00:05:46.127 SYMLINK libspdk_blobfs.so 00:05:46.385 SO libspdk_bdev.so.17.0 00:05:46.385 LIB libspdk_lvol.a 00:05:46.385 SO libspdk_lvol.so.11.0 00:05:46.385 SYMLINK libspdk_bdev.so 00:05:46.385 SYMLINK libspdk_lvol.so 00:05:46.644 CC lib/nbd/nbd.o 00:05:46.644 CC lib/nbd/nbd_rpc.o 00:05:46.644 CC lib/scsi/dev.o 00:05:46.644 CC lib/scsi/lun.o 00:05:46.644 CC lib/ublk/ublk.o 00:05:46.644 CC lib/ublk/ublk_rpc.o 00:05:46.644 CC lib/scsi/port.o 00:05:46.644 CC lib/scsi/scsi.o 00:05:46.644 CC lib/nvmf/ctrlr.o 00:05:46.644 CC lib/ftl/ftl_core.o 00:05:46.902 CC lib/scsi/scsi_bdev.o 00:05:46.902 CC lib/nvmf/ctrlr_discovery.o 00:05:46.902 CC lib/scsi/scsi_pr.o 00:05:46.902 CC lib/ftl/ftl_init.o 00:05:46.902 CC lib/scsi/scsi_rpc.o 00:05:47.161 CC lib/scsi/task.o 00:05:47.161 CC lib/ftl/ftl_layout.o 00:05:47.161 CC lib/ftl/ftl_debug.o 00:05:47.161 CC lib/ftl/ftl_io.o 00:05:47.161 LIB libspdk_nbd.a 00:05:47.161 SO libspdk_nbd.so.7.0 00:05:47.418 CC lib/nvmf/ctrlr_bdev.o 00:05:47.418 CC lib/nvmf/subsystem.o 00:05:47.418 SYMLINK libspdk_nbd.so 00:05:47.418 CC lib/nvmf/nvmf.o 00:05:47.418 CC lib/nvmf/nvmf_rpc.o 00:05:47.418 CC lib/ftl/ftl_sb.o 00:05:47.418 LIB libspdk_scsi.a 00:05:47.418 LIB libspdk_ublk.a 00:05:47.418 CC lib/nvmf/transport.o 00:05:47.418 CC lib/ftl/ftl_l2p.o 00:05:47.678 SO libspdk_scsi.so.9.0 00:05:47.678 SO libspdk_ublk.so.3.0 00:05:47.678 SYMLINK libspdk_ublk.so 00:05:47.678 CC lib/nvmf/tcp.o 00:05:47.678 CC lib/nvmf/stubs.o 00:05:47.678 SYMLINK libspdk_scsi.so 00:05:47.678 CC lib/nvmf/mdns_server.o 00:05:47.678 CC lib/ftl/ftl_l2p_flat.o 00:05:47.940 CC lib/ftl/ftl_nv_cache.o 00:05:48.199 CC lib/ftl/ftl_band.o 00:05:48.199 CC lib/ftl/ftl_band_ops.o 00:05:48.199 CC lib/ftl/ftl_writer.o 00:05:48.457 CC lib/ftl/ftl_rq.o 
00:05:48.457 CC lib/ftl/ftl_reloc.o 00:05:48.457 CC lib/ftl/ftl_l2p_cache.o 00:05:48.457 CC lib/ftl/ftl_p2l.o 00:05:48.718 CC lib/ftl/ftl_p2l_log.o 00:05:48.979 CC lib/ftl/mngt/ftl_mngt.o 00:05:48.979 CC lib/iscsi/conn.o 00:05:48.979 CC lib/nvmf/rdma.o 00:05:48.979 CC lib/vhost/vhost.o 00:05:48.979 CC lib/vhost/vhost_rpc.o 00:05:48.979 CC lib/vhost/vhost_scsi.o 00:05:48.979 CC lib/vhost/vhost_blk.o 00:05:49.241 CC lib/vhost/rte_vhost_user.o 00:05:49.241 CC lib/iscsi/init_grp.o 00:05:49.241 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:49.502 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:49.502 CC lib/iscsi/iscsi.o 00:05:49.502 CC lib/iscsi/param.o 00:05:49.761 CC lib/nvmf/auth.o 00:05:49.761 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:49.761 CC lib/iscsi/portal_grp.o 00:05:50.019 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:50.019 CC lib/iscsi/tgt_node.o 00:05:50.019 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:50.019 CC lib/iscsi/iscsi_subsystem.o 00:05:50.019 CC lib/iscsi/iscsi_rpc.o 00:05:50.278 CC lib/iscsi/task.o 00:05:50.278 LIB libspdk_vhost.a 00:05:50.278 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:50.278 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:50.536 SO libspdk_vhost.so.8.0 00:05:50.536 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:50.536 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:50.536 SYMLINK libspdk_vhost.so 00:05:50.536 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:50.536 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:50.536 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:50.536 CC lib/ftl/utils/ftl_conf.o 00:05:50.536 CC lib/ftl/utils/ftl_md.o 00:05:50.795 CC lib/ftl/utils/ftl_mempool.o 00:05:50.795 CC lib/ftl/utils/ftl_bitmap.o 00:05:50.795 CC lib/ftl/utils/ftl_property.o 00:05:50.795 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:50.795 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:50.795 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:51.054 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:51.054 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:51.054 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:51.054 CC 
lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:51.054 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:51.054 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:51.054 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:51.054 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:51.312 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:51.312 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:51.312 CC lib/ftl/base/ftl_base_dev.o 00:05:51.312 CC lib/ftl/base/ftl_base_bdev.o 00:05:51.312 CC lib/ftl/ftl_trace.o 00:05:51.312 LIB libspdk_iscsi.a 00:05:51.571 SO libspdk_iscsi.so.8.0 00:05:51.571 LIB libspdk_ftl.a 00:05:51.571 SYMLINK libspdk_iscsi.so 00:05:51.830 LIB libspdk_nvmf.a 00:05:51.830 SO libspdk_ftl.so.9.0 00:05:52.088 SO libspdk_nvmf.so.20.0 00:05:52.347 SYMLINK libspdk_ftl.so 00:05:52.347 SYMLINK libspdk_nvmf.so 00:05:52.913 CC module/env_dpdk/env_dpdk_rpc.o 00:05:52.913 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:52.913 CC module/scheduler/gscheduler/gscheduler.o 00:05:52.913 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:52.913 CC module/keyring/file/keyring.o 00:05:52.913 CC module/blob/bdev/blob_bdev.o 00:05:52.913 CC module/keyring/linux/keyring.o 00:05:52.913 CC module/fsdev/aio/fsdev_aio.o 00:05:52.913 CC module/accel/error/accel_error.o 00:05:52.913 CC module/sock/posix/posix.o 00:05:52.913 LIB libspdk_env_dpdk_rpc.a 00:05:52.913 SO libspdk_env_dpdk_rpc.so.6.0 00:05:52.913 SYMLINK libspdk_env_dpdk_rpc.so 00:05:52.913 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:52.913 CC module/keyring/file/keyring_rpc.o 00:05:52.913 CC module/keyring/linux/keyring_rpc.o 00:05:52.913 LIB libspdk_scheduler_gscheduler.a 00:05:52.913 LIB libspdk_scheduler_dpdk_governor.a 00:05:53.171 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:53.171 SO libspdk_scheduler_gscheduler.so.4.0 00:05:53.171 LIB libspdk_scheduler_dynamic.a 00:05:53.171 SO libspdk_scheduler_dynamic.so.4.0 00:05:53.171 CC module/accel/error/accel_error_rpc.o 00:05:53.171 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:53.171 SYMLINK libspdk_scheduler_gscheduler.so 
00:05:53.171 CC module/fsdev/aio/linux_aio_mgr.o 00:05:53.171 LIB libspdk_blob_bdev.a 00:05:53.171 LIB libspdk_keyring_linux.a 00:05:53.171 SYMLINK libspdk_scheduler_dynamic.so 00:05:53.171 LIB libspdk_keyring_file.a 00:05:53.171 SO libspdk_blob_bdev.so.12.0 00:05:53.171 SO libspdk_keyring_linux.so.1.0 00:05:53.171 SO libspdk_keyring_file.so.2.0 00:05:53.171 LIB libspdk_accel_error.a 00:05:53.171 SYMLINK libspdk_blob_bdev.so 00:05:53.171 SYMLINK libspdk_keyring_linux.so 00:05:53.171 SYMLINK libspdk_keyring_file.so 00:05:53.171 SO libspdk_accel_error.so.2.0 00:05:53.171 CC module/accel/ioat/accel_ioat.o 00:05:53.429 CC module/accel/ioat/accel_ioat_rpc.o 00:05:53.429 CC module/accel/dsa/accel_dsa.o 00:05:53.429 CC module/accel/iaa/accel_iaa.o 00:05:53.429 CC module/accel/dsa/accel_dsa_rpc.o 00:05:53.429 SYMLINK libspdk_accel_error.so 00:05:53.429 CC module/accel/iaa/accel_iaa_rpc.o 00:05:53.429 LIB libspdk_accel_ioat.a 00:05:53.687 LIB libspdk_accel_iaa.a 00:05:53.687 SO libspdk_accel_ioat.so.6.0 00:05:53.687 CC module/bdev/delay/vbdev_delay.o 00:05:53.687 SO libspdk_accel_iaa.so.3.0 00:05:53.687 CC module/blobfs/bdev/blobfs_bdev.o 00:05:53.687 LIB libspdk_fsdev_aio.a 00:05:53.687 SYMLINK libspdk_accel_ioat.so 00:05:53.687 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:53.687 LIB libspdk_accel_dsa.a 00:05:53.687 CC module/bdev/error/vbdev_error.o 00:05:53.687 CC module/bdev/gpt/gpt.o 00:05:53.687 SO libspdk_fsdev_aio.so.1.0 00:05:53.687 SYMLINK libspdk_accel_iaa.so 00:05:53.687 SO libspdk_accel_dsa.so.5.0 00:05:53.687 CC module/bdev/gpt/vbdev_gpt.o 00:05:53.687 CC module/bdev/lvol/vbdev_lvol.o 00:05:53.687 LIB libspdk_sock_posix.a 00:05:53.687 SYMLINK libspdk_fsdev_aio.so 00:05:53.687 SYMLINK libspdk_accel_dsa.so 00:05:53.687 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:53.687 SO libspdk_sock_posix.so.6.0 00:05:53.687 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:53.945 SYMLINK libspdk_sock_posix.so 00:05:53.945 CC module/bdev/malloc/bdev_malloc.o 00:05:53.945 CC 
module/bdev/error/vbdev_error_rpc.o 00:05:53.945 LIB libspdk_blobfs_bdev.a 00:05:53.945 LIB libspdk_bdev_gpt.a 00:05:53.945 LIB libspdk_bdev_delay.a 00:05:53.945 CC module/bdev/null/bdev_null.o 00:05:53.945 SO libspdk_blobfs_bdev.so.6.0 00:05:53.945 SO libspdk_bdev_gpt.so.6.0 00:05:53.945 CC module/bdev/nvme/bdev_nvme.o 00:05:53.945 CC module/bdev/passthru/vbdev_passthru.o 00:05:53.945 SO libspdk_bdev_delay.so.6.0 00:05:54.204 SYMLINK libspdk_blobfs_bdev.so 00:05:54.204 SYMLINK libspdk_bdev_gpt.so 00:05:54.204 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:54.204 CC module/bdev/null/bdev_null_rpc.o 00:05:54.204 LIB libspdk_bdev_error.a 00:05:54.204 SYMLINK libspdk_bdev_delay.so 00:05:54.204 CC module/bdev/nvme/nvme_rpc.o 00:05:54.204 SO libspdk_bdev_error.so.6.0 00:05:54.204 SYMLINK libspdk_bdev_error.so 00:05:54.204 CC module/bdev/nvme/bdev_mdns_client.o 00:05:54.204 LIB libspdk_bdev_lvol.a 00:05:54.204 LIB libspdk_bdev_null.a 00:05:54.204 SO libspdk_bdev_lvol.so.6.0 00:05:54.204 CC module/bdev/raid/bdev_raid.o 00:05:54.463 SO libspdk_bdev_null.so.6.0 00:05:54.463 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:54.463 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:54.463 SYMLINK libspdk_bdev_lvol.so 00:05:54.463 CC module/bdev/nvme/vbdev_opal.o 00:05:54.463 SYMLINK libspdk_bdev_null.so 00:05:54.463 CC module/bdev/split/vbdev_split.o 00:05:54.463 LIB libspdk_bdev_malloc.a 00:05:54.463 LIB libspdk_bdev_passthru.a 00:05:54.463 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:54.463 CC module/bdev/aio/bdev_aio.o 00:05:54.463 SO libspdk_bdev_malloc.so.6.0 00:05:54.463 CC module/bdev/ftl/bdev_ftl.o 00:05:54.463 SO libspdk_bdev_passthru.so.6.0 00:05:54.721 SYMLINK libspdk_bdev_malloc.so 00:05:54.721 SYMLINK libspdk_bdev_passthru.so 00:05:54.721 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:54.721 CC module/bdev/aio/bdev_aio_rpc.o 00:05:54.721 CC module/bdev/raid/bdev_raid_rpc.o 00:05:54.721 CC module/bdev/split/vbdev_split_rpc.o 00:05:54.721 CC module/bdev/raid/bdev_raid_sb.o 
00:05:54.721 CC module/bdev/raid/raid0.o 00:05:54.721 LIB libspdk_bdev_split.a 00:05:55.018 CC module/bdev/raid/raid1.o 00:05:55.018 CC module/bdev/raid/concat.o 00:05:55.018 LIB libspdk_bdev_ftl.a 00:05:55.018 SO libspdk_bdev_split.so.6.0 00:05:55.018 SO libspdk_bdev_ftl.so.6.0 00:05:55.018 LIB libspdk_bdev_aio.a 00:05:55.018 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:55.018 SO libspdk_bdev_aio.so.6.0 00:05:55.018 SYMLINK libspdk_bdev_split.so 00:05:55.018 SYMLINK libspdk_bdev_ftl.so 00:05:55.018 SYMLINK libspdk_bdev_aio.so 00:05:55.018 CC module/bdev/raid/raid5f.o 00:05:55.018 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:55.018 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:55.278 LIB libspdk_bdev_zone_block.a 00:05:55.278 SO libspdk_bdev_zone_block.so.6.0 00:05:55.278 CC module/bdev/iscsi/bdev_iscsi.o 00:05:55.278 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:55.278 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:55.278 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:55.278 SYMLINK libspdk_bdev_zone_block.so 00:05:55.278 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:55.538 LIB libspdk_bdev_raid.a 00:05:55.538 LIB libspdk_bdev_iscsi.a 00:05:55.797 SO libspdk_bdev_iscsi.so.6.0 00:05:55.797 SO libspdk_bdev_raid.so.6.0 00:05:55.797 SYMLINK libspdk_bdev_iscsi.so 00:05:55.797 LIB libspdk_bdev_virtio.a 00:05:55.798 SYMLINK libspdk_bdev_raid.so 00:05:55.798 SO libspdk_bdev_virtio.so.6.0 00:05:56.057 SYMLINK libspdk_bdev_virtio.so 00:05:56.998 LIB libspdk_bdev_nvme.a 00:05:56.998 SO libspdk_bdev_nvme.so.7.1 00:05:57.262 SYMLINK libspdk_bdev_nvme.so 00:05:57.839 CC module/event/subsystems/vmd/vmd.o 00:05:57.839 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:57.839 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:57.839 CC module/event/subsystems/scheduler/scheduler.o 00:05:57.839 CC module/event/subsystems/sock/sock.o 00:05:57.839 CC module/event/subsystems/keyring/keyring.o 00:05:57.839 CC module/event/subsystems/fsdev/fsdev.o 00:05:57.839 CC 
module/event/subsystems/iobuf/iobuf.o 00:05:57.839 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:57.839 LIB libspdk_event_vmd.a 00:05:57.839 LIB libspdk_event_sock.a 00:05:57.839 LIB libspdk_event_keyring.a 00:05:57.839 LIB libspdk_event_fsdev.a 00:05:58.097 LIB libspdk_event_scheduler.a 00:05:58.097 LIB libspdk_event_vhost_blk.a 00:05:58.097 LIB libspdk_event_iobuf.a 00:05:58.097 SO libspdk_event_keyring.so.1.0 00:05:58.097 SO libspdk_event_sock.so.5.0 00:05:58.097 SO libspdk_event_fsdev.so.1.0 00:05:58.097 SO libspdk_event_scheduler.so.4.0 00:05:58.097 SO libspdk_event_vmd.so.6.0 00:05:58.097 SO libspdk_event_vhost_blk.so.3.0 00:05:58.097 SO libspdk_event_iobuf.so.3.0 00:05:58.097 SYMLINK libspdk_event_keyring.so 00:05:58.097 SYMLINK libspdk_event_fsdev.so 00:05:58.098 SYMLINK libspdk_event_sock.so 00:05:58.098 SYMLINK libspdk_event_scheduler.so 00:05:58.098 SYMLINK libspdk_event_vmd.so 00:05:58.098 SYMLINK libspdk_event_vhost_blk.so 00:05:58.098 SYMLINK libspdk_event_iobuf.so 00:05:58.356 CC module/event/subsystems/accel/accel.o 00:05:58.615 LIB libspdk_event_accel.a 00:05:58.615 SO libspdk_event_accel.so.6.0 00:05:58.874 SYMLINK libspdk_event_accel.so 00:05:59.134 CC module/event/subsystems/bdev/bdev.o 00:05:59.393 LIB libspdk_event_bdev.a 00:05:59.393 SO libspdk_event_bdev.so.6.0 00:05:59.393 SYMLINK libspdk_event_bdev.so 00:05:59.984 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:59.984 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:59.984 CC module/event/subsystems/nbd/nbd.o 00:05:59.984 CC module/event/subsystems/scsi/scsi.o 00:05:59.984 CC module/event/subsystems/ublk/ublk.o 00:05:59.984 LIB libspdk_event_ublk.a 00:05:59.984 LIB libspdk_event_scsi.a 00:05:59.984 LIB libspdk_event_nbd.a 00:05:59.984 SO libspdk_event_ublk.so.3.0 00:05:59.984 LIB libspdk_event_nvmf.a 00:05:59.984 SO libspdk_event_nbd.so.6.0 00:05:59.984 SO libspdk_event_scsi.so.6.0 00:06:00.243 SYMLINK libspdk_event_ublk.so 00:06:00.243 SO libspdk_event_nvmf.so.6.0 00:06:00.243 
SYMLINK libspdk_event_nbd.so 00:06:00.243 SYMLINK libspdk_event_scsi.so 00:06:00.243 SYMLINK libspdk_event_nvmf.so 00:06:00.502 CC module/event/subsystems/iscsi/iscsi.o 00:06:00.502 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:00.762 LIB libspdk_event_iscsi.a 00:06:00.762 LIB libspdk_event_vhost_scsi.a 00:06:00.762 SO libspdk_event_iscsi.so.6.0 00:06:00.762 SO libspdk_event_vhost_scsi.so.3.0 00:06:00.762 SYMLINK libspdk_event_iscsi.so 00:06:00.762 SYMLINK libspdk_event_vhost_scsi.so 00:06:01.021 SO libspdk.so.6.0 00:06:01.021 SYMLINK libspdk.so 00:06:01.281 CC app/spdk_nvme_identify/identify.o 00:06:01.281 CC app/spdk_lspci/spdk_lspci.o 00:06:01.281 CXX app/trace/trace.o 00:06:01.281 CC app/spdk_nvme_perf/perf.o 00:06:01.281 CC app/trace_record/trace_record.o 00:06:01.281 CC app/iscsi_tgt/iscsi_tgt.o 00:06:01.281 CC app/nvmf_tgt/nvmf_main.o 00:06:01.281 CC app/spdk_tgt/spdk_tgt.o 00:06:01.540 CC test/thread/poller_perf/poller_perf.o 00:06:01.540 CC examples/util/zipf/zipf.o 00:06:01.540 LINK spdk_lspci 00:06:01.540 LINK iscsi_tgt 00:06:01.540 LINK poller_perf 00:06:01.540 LINK nvmf_tgt 00:06:01.540 LINK spdk_tgt 00:06:01.540 LINK spdk_trace_record 00:06:01.540 LINK zipf 00:06:01.799 LINK spdk_trace 00:06:01.799 CC app/spdk_nvme_discover/discovery_aer.o 00:06:01.799 CC app/spdk_top/spdk_top.o 00:06:02.058 CC app/spdk_dd/spdk_dd.o 00:06:02.058 CC examples/ioat/perf/perf.o 00:06:02.058 LINK spdk_nvme_discover 00:06:02.058 CC test/dma/test_dma/test_dma.o 00:06:02.058 CC app/fio/nvme/fio_plugin.o 00:06:02.058 CC test/app/bdev_svc/bdev_svc.o 00:06:02.058 CC app/fio/bdev/fio_plugin.o 00:06:02.058 LINK ioat_perf 00:06:02.317 LINK spdk_nvme_perf 00:06:02.317 LINK bdev_svc 00:06:02.317 LINK spdk_dd 00:06:02.317 LINK spdk_nvme_identify 00:06:02.317 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:02.317 CC examples/ioat/verify/verify.o 00:06:02.576 LINK test_dma 00:06:02.576 CC app/vhost/vhost.o 00:06:02.576 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:02.576 CC 
test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:02.576 LINK spdk_bdev 00:06:02.576 CC test/app/histogram_perf/histogram_perf.o 00:06:02.576 LINK verify 00:06:02.576 LINK spdk_nvme 00:06:02.835 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:02.835 LINK vhost 00:06:02.835 LINK nvme_fuzz 00:06:02.835 LINK histogram_perf 00:06:02.835 CC test/app/stub/stub.o 00:06:02.835 CC test/app/jsoncat/jsoncat.o 00:06:02.835 LINK spdk_top 00:06:03.094 CC examples/vmd/lsvmd/lsvmd.o 00:06:03.094 CC examples/idxd/perf/perf.o 00:06:03.094 LINK jsoncat 00:06:03.094 LINK stub 00:06:03.094 CC examples/vmd/led/led.o 00:06:03.094 TEST_HEADER include/spdk/accel.h 00:06:03.095 TEST_HEADER include/spdk/accel_module.h 00:06:03.095 TEST_HEADER include/spdk/assert.h 00:06:03.095 LINK lsvmd 00:06:03.095 TEST_HEADER include/spdk/barrier.h 00:06:03.095 TEST_HEADER include/spdk/base64.h 00:06:03.095 TEST_HEADER include/spdk/bdev.h 00:06:03.095 TEST_HEADER include/spdk/bdev_module.h 00:06:03.095 TEST_HEADER include/spdk/bdev_zone.h 00:06:03.095 TEST_HEADER include/spdk/bit_array.h 00:06:03.095 TEST_HEADER include/spdk/bit_pool.h 00:06:03.095 TEST_HEADER include/spdk/blob_bdev.h 00:06:03.095 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:03.095 TEST_HEADER include/spdk/blobfs.h 00:06:03.095 TEST_HEADER include/spdk/blob.h 00:06:03.095 TEST_HEADER include/spdk/conf.h 00:06:03.095 TEST_HEADER include/spdk/config.h 00:06:03.095 TEST_HEADER include/spdk/cpuset.h 00:06:03.095 TEST_HEADER include/spdk/crc16.h 00:06:03.095 TEST_HEADER include/spdk/crc32.h 00:06:03.095 TEST_HEADER include/spdk/crc64.h 00:06:03.095 TEST_HEADER include/spdk/dif.h 00:06:03.095 TEST_HEADER include/spdk/dma.h 00:06:03.095 TEST_HEADER include/spdk/endian.h 00:06:03.095 TEST_HEADER include/spdk/env_dpdk.h 00:06:03.095 TEST_HEADER include/spdk/env.h 00:06:03.095 TEST_HEADER include/spdk/event.h 00:06:03.095 TEST_HEADER include/spdk/fd_group.h 00:06:03.095 TEST_HEADER include/spdk/fd.h 00:06:03.095 TEST_HEADER include/spdk/file.h 
00:06:03.095 TEST_HEADER include/spdk/fsdev.h 00:06:03.095 TEST_HEADER include/spdk/fsdev_module.h 00:06:03.095 CC examples/interrupt_tgt/interrupt_tgt.o 00:06:03.095 TEST_HEADER include/spdk/ftl.h 00:06:03.095 TEST_HEADER include/spdk/fuse_dispatcher.h 00:06:03.095 TEST_HEADER include/spdk/gpt_spec.h 00:06:03.095 TEST_HEADER include/spdk/hexlify.h 00:06:03.095 TEST_HEADER include/spdk/histogram_data.h 00:06:03.095 TEST_HEADER include/spdk/idxd.h 00:06:03.095 TEST_HEADER include/spdk/idxd_spec.h 00:06:03.095 TEST_HEADER include/spdk/init.h 00:06:03.095 TEST_HEADER include/spdk/ioat.h 00:06:03.095 TEST_HEADER include/spdk/ioat_spec.h 00:06:03.095 TEST_HEADER include/spdk/iscsi_spec.h 00:06:03.095 TEST_HEADER include/spdk/json.h 00:06:03.095 TEST_HEADER include/spdk/jsonrpc.h 00:06:03.095 TEST_HEADER include/spdk/keyring.h 00:06:03.095 TEST_HEADER include/spdk/keyring_module.h 00:06:03.095 TEST_HEADER include/spdk/likely.h 00:06:03.095 TEST_HEADER include/spdk/log.h 00:06:03.095 TEST_HEADER include/spdk/lvol.h 00:06:03.095 TEST_HEADER include/spdk/md5.h 00:06:03.095 TEST_HEADER include/spdk/memory.h 00:06:03.095 TEST_HEADER include/spdk/mmio.h 00:06:03.355 TEST_HEADER include/spdk/nbd.h 00:06:03.355 TEST_HEADER include/spdk/net.h 00:06:03.355 TEST_HEADER include/spdk/notify.h 00:06:03.355 LINK vhost_fuzz 00:06:03.355 TEST_HEADER include/spdk/nvme.h 00:06:03.355 TEST_HEADER include/spdk/nvme_intel.h 00:06:03.355 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:03.355 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:03.355 TEST_HEADER include/spdk/nvme_spec.h 00:06:03.355 TEST_HEADER include/spdk/nvme_zns.h 00:06:03.355 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:03.355 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:03.355 LINK led 00:06:03.355 TEST_HEADER include/spdk/nvmf.h 00:06:03.355 TEST_HEADER include/spdk/nvmf_spec.h 00:06:03.355 TEST_HEADER include/spdk/nvmf_transport.h 00:06:03.355 TEST_HEADER include/spdk/opal.h 00:06:03.355 TEST_HEADER include/spdk/opal_spec.h 
00:06:03.355 TEST_HEADER include/spdk/pci_ids.h 00:06:03.355 TEST_HEADER include/spdk/pipe.h 00:06:03.355 TEST_HEADER include/spdk/queue.h 00:06:03.355 TEST_HEADER include/spdk/reduce.h 00:06:03.355 TEST_HEADER include/spdk/rpc.h 00:06:03.355 TEST_HEADER include/spdk/scheduler.h 00:06:03.355 CC test/env/vtophys/vtophys.o 00:06:03.355 TEST_HEADER include/spdk/scsi.h 00:06:03.355 TEST_HEADER include/spdk/scsi_spec.h 00:06:03.355 TEST_HEADER include/spdk/sock.h 00:06:03.355 TEST_HEADER include/spdk/stdinc.h 00:06:03.355 TEST_HEADER include/spdk/string.h 00:06:03.355 TEST_HEADER include/spdk/thread.h 00:06:03.355 TEST_HEADER include/spdk/trace.h 00:06:03.355 CC test/env/mem_callbacks/mem_callbacks.o 00:06:03.355 TEST_HEADER include/spdk/trace_parser.h 00:06:03.355 TEST_HEADER include/spdk/tree.h 00:06:03.355 TEST_HEADER include/spdk/ublk.h 00:06:03.355 TEST_HEADER include/spdk/util.h 00:06:03.355 TEST_HEADER include/spdk/uuid.h 00:06:03.355 TEST_HEADER include/spdk/version.h 00:06:03.355 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:03.355 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:03.355 TEST_HEADER include/spdk/vhost.h 00:06:03.355 TEST_HEADER include/spdk/vmd.h 00:06:03.355 TEST_HEADER include/spdk/xor.h 00:06:03.355 TEST_HEADER include/spdk/zipf.h 00:06:03.355 CXX test/cpp_headers/accel.o 00:06:03.355 LINK interrupt_tgt 00:06:03.355 LINK idxd_perf 00:06:03.355 CXX test/cpp_headers/accel_module.o 00:06:03.355 LINK vtophys 00:06:03.355 CC examples/thread/thread/thread_ex.o 00:06:03.614 CC examples/sock/hello_world/hello_sock.o 00:06:03.614 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:03.614 CC test/env/memory/memory_ut.o 00:06:03.614 CXX test/cpp_headers/assert.o 00:06:03.614 CXX test/cpp_headers/barrier.o 00:06:03.614 CXX test/cpp_headers/base64.o 00:06:03.614 CC test/env/pci/pci_ut.o 00:06:03.614 LINK env_dpdk_post_init 00:06:03.614 LINK thread 00:06:03.873 LINK hello_sock 00:06:03.873 CXX test/cpp_headers/bdev.o 00:06:03.873 CXX 
test/cpp_headers/bdev_module.o 00:06:03.873 LINK mem_callbacks 00:06:03.873 CXX test/cpp_headers/bdev_zone.o 00:06:03.873 CC test/event/event_perf/event_perf.o 00:06:03.873 CC test/event/reactor/reactor.o 00:06:04.233 CC test/event/reactor_perf/reactor_perf.o 00:06:04.233 LINK pci_ut 00:06:04.233 CC test/event/app_repeat/app_repeat.o 00:06:04.233 LINK event_perf 00:06:04.233 LINK reactor 00:06:04.233 CC examples/nvme/hello_world/hello_world.o 00:06:04.233 CXX test/cpp_headers/bit_array.o 00:06:04.233 CC examples/fsdev/hello_world/hello_fsdev.o 00:06:04.233 LINK reactor_perf 00:06:04.233 LINK app_repeat 00:06:04.233 CXX test/cpp_headers/bit_pool.o 00:06:04.491 CXX test/cpp_headers/blob_bdev.o 00:06:04.491 LINK hello_world 00:06:04.491 CXX test/cpp_headers/blobfs_bdev.o 00:06:04.491 LINK iscsi_fuzz 00:06:04.491 LINK hello_fsdev 00:06:04.491 CC examples/accel/perf/accel_perf.o 00:06:04.491 CC examples/blob/hello_world/hello_blob.o 00:06:04.491 CC test/event/scheduler/scheduler.o 00:06:04.491 CC examples/nvme/reconnect/reconnect.o 00:06:04.491 CXX test/cpp_headers/blobfs.o 00:06:04.749 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:04.749 CC examples/blob/cli/blobcli.o 00:06:04.749 CXX test/cpp_headers/blob.o 00:06:04.749 LINK memory_ut 00:06:04.749 LINK hello_blob 00:06:04.749 CC examples/nvme/arbitration/arbitration.o 00:06:04.749 LINK scheduler 00:06:05.007 CC test/nvme/aer/aer.o 00:06:05.007 CXX test/cpp_headers/conf.o 00:06:05.007 LINK reconnect 00:06:05.007 LINK accel_perf 00:06:05.007 CXX test/cpp_headers/config.o 00:06:05.007 CXX test/cpp_headers/cpuset.o 00:06:05.266 CC test/rpc_client/rpc_client_test.o 00:06:05.266 CC test/nvme/reset/reset.o 00:06:05.266 LINK arbitration 00:06:05.266 LINK aer 00:06:05.266 CC examples/nvme/hotplug/hotplug.o 00:06:05.266 LINK nvme_manage 00:06:05.266 CC test/accel/dif/dif.o 00:06:05.266 LINK blobcli 00:06:05.266 CXX test/cpp_headers/crc16.o 00:06:05.266 LINK rpc_client_test 00:06:05.266 CC examples/nvme/cmb_copy/cmb_copy.o 
00:06:05.524 LINK reset 00:06:05.524 CXX test/cpp_headers/crc32.o 00:06:05.524 CC examples/nvme/abort/abort.o 00:06:05.524 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:05.524 LINK cmb_copy 00:06:05.524 LINK hotplug 00:06:05.782 CXX test/cpp_headers/crc64.o 00:06:05.782 CC test/blobfs/mkfs/mkfs.o 00:06:05.782 LINK pmr_persistence 00:06:05.782 CC examples/bdev/hello_world/hello_bdev.o 00:06:05.782 CC test/nvme/sgl/sgl.o 00:06:05.782 CC test/lvol/esnap/esnap.o 00:06:05.782 CC test/nvme/e2edp/nvme_dp.o 00:06:05.782 CC test/nvme/overhead/overhead.o 00:06:05.782 CXX test/cpp_headers/dif.o 00:06:05.782 CXX test/cpp_headers/dma.o 00:06:05.782 LINK mkfs 00:06:06.041 LINK abort 00:06:06.041 LINK hello_bdev 00:06:06.041 LINK dif 00:06:06.041 CXX test/cpp_headers/endian.o 00:06:06.041 LINK sgl 00:06:06.041 CXX test/cpp_headers/env_dpdk.o 00:06:06.041 LINK nvme_dp 00:06:06.041 CC test/nvme/err_injection/err_injection.o 00:06:06.301 LINK overhead 00:06:06.301 CC examples/bdev/bdevperf/bdevperf.o 00:06:06.301 CXX test/cpp_headers/env.o 00:06:06.301 CXX test/cpp_headers/event.o 00:06:06.301 LINK err_injection 00:06:06.301 CC test/nvme/startup/startup.o 00:06:06.301 CC test/nvme/reserve/reserve.o 00:06:06.560 CC test/nvme/simple_copy/simple_copy.o 00:06:06.560 CC test/nvme/connect_stress/connect_stress.o 00:06:06.560 CXX test/cpp_headers/fd_group.o 00:06:06.560 LINK startup 00:06:06.560 CC test/nvme/boot_partition/boot_partition.o 00:06:06.560 CC test/nvme/compliance/nvme_compliance.o 00:06:06.819 LINK reserve 00:06:06.819 CXX test/cpp_headers/fd.o 00:06:06.819 LINK simple_copy 00:06:06.819 CC test/nvme/fused_ordering/fused_ordering.o 00:06:06.819 LINK connect_stress 00:06:06.819 LINK boot_partition 00:06:06.819 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:06.819 CXX test/cpp_headers/file.o 00:06:07.076 LINK fused_ordering 00:06:07.076 CXX test/cpp_headers/fsdev.o 00:06:07.076 LINK nvme_compliance 00:06:07.076 CC test/nvme/fdp/fdp.o 00:06:07.076 CC 
test/nvme/cuse/cuse.o 00:06:07.076 LINK doorbell_aers 00:06:07.076 CXX test/cpp_headers/fsdev_module.o 00:06:07.076 CC test/bdev/bdevio/bdevio.o 00:06:07.076 CXX test/cpp_headers/ftl.o 00:06:07.076 CXX test/cpp_headers/fuse_dispatcher.o 00:06:07.334 LINK bdevperf 00:06:07.334 CXX test/cpp_headers/gpt_spec.o 00:06:07.334 CXX test/cpp_headers/hexlify.o 00:06:07.334 CXX test/cpp_headers/histogram_data.o 00:06:07.334 CXX test/cpp_headers/idxd.o 00:06:07.334 CXX test/cpp_headers/idxd_spec.o 00:06:07.334 CXX test/cpp_headers/init.o 00:06:07.334 LINK fdp 00:06:07.334 CXX test/cpp_headers/ioat.o 00:06:07.334 CXX test/cpp_headers/ioat_spec.o 00:06:07.593 LINK bdevio 00:06:07.593 CXX test/cpp_headers/iscsi_spec.o 00:06:07.593 CXX test/cpp_headers/json.o 00:06:07.593 CXX test/cpp_headers/jsonrpc.o 00:06:07.593 CXX test/cpp_headers/keyring.o 00:06:07.593 CXX test/cpp_headers/keyring_module.o 00:06:07.593 CXX test/cpp_headers/likely.o 00:06:07.593 CC examples/nvmf/nvmf/nvmf.o 00:06:07.593 CXX test/cpp_headers/log.o 00:06:07.850 CXX test/cpp_headers/lvol.o 00:06:07.850 CXX test/cpp_headers/md5.o 00:06:07.850 CXX test/cpp_headers/memory.o 00:06:07.850 CXX test/cpp_headers/mmio.o 00:06:07.850 CXX test/cpp_headers/nbd.o 00:06:07.850 CXX test/cpp_headers/net.o 00:06:07.850 CXX test/cpp_headers/notify.o 00:06:07.850 CXX test/cpp_headers/nvme.o 00:06:07.850 CXX test/cpp_headers/nvme_intel.o 00:06:07.850 CXX test/cpp_headers/nvme_ocssd.o 00:06:07.850 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:07.850 CXX test/cpp_headers/nvme_spec.o 00:06:08.108 CXX test/cpp_headers/nvme_zns.o 00:06:08.108 CXX test/cpp_headers/nvmf_cmd.o 00:06:08.108 LINK nvmf 00:06:08.108 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:08.108 CXX test/cpp_headers/nvmf.o 00:06:08.108 CXX test/cpp_headers/nvmf_spec.o 00:06:08.108 CXX test/cpp_headers/nvmf_transport.o 00:06:08.108 CXX test/cpp_headers/opal.o 00:06:08.108 CXX test/cpp_headers/opal_spec.o 00:06:08.108 CXX test/cpp_headers/pci_ids.o 00:06:08.108 CXX 
test/cpp_headers/pipe.o 00:06:08.365 CXX test/cpp_headers/queue.o 00:06:08.365 CXX test/cpp_headers/reduce.o 00:06:08.365 CXX test/cpp_headers/rpc.o 00:06:08.365 CXX test/cpp_headers/scheduler.o 00:06:08.365 CXX test/cpp_headers/scsi.o 00:06:08.365 CXX test/cpp_headers/scsi_spec.o 00:06:08.365 CXX test/cpp_headers/sock.o 00:06:08.365 CXX test/cpp_headers/stdinc.o 00:06:08.365 CXX test/cpp_headers/string.o 00:06:08.365 LINK cuse 00:06:08.365 CXX test/cpp_headers/thread.o 00:06:08.365 CXX test/cpp_headers/trace.o 00:06:08.365 CXX test/cpp_headers/trace_parser.o 00:06:08.624 CXX test/cpp_headers/tree.o 00:06:08.624 CXX test/cpp_headers/ublk.o 00:06:08.624 CXX test/cpp_headers/util.o 00:06:08.624 CXX test/cpp_headers/uuid.o 00:06:08.624 CXX test/cpp_headers/version.o 00:06:08.624 CXX test/cpp_headers/vfio_user_pci.o 00:06:08.624 CXX test/cpp_headers/vfio_user_spec.o 00:06:08.624 CXX test/cpp_headers/vhost.o 00:06:08.624 CXX test/cpp_headers/vmd.o 00:06:08.624 CXX test/cpp_headers/xor.o 00:06:08.624 CXX test/cpp_headers/zipf.o 00:06:11.911 LINK esnap 00:06:12.478 00:06:12.478 real 1m30.309s 00:06:12.478 user 7m50.639s 00:06:12.478 sys 2m5.805s 00:06:12.478 ************************************ 00:06:12.478 END TEST make 00:06:12.478 ************************************ 00:06:12.478 15:32:55 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:12.478 15:32:55 make -- common/autotest_common.sh@10 -- $ set +x 00:06:12.478 15:32:55 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:12.478 15:32:55 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:12.478 15:32:55 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:12.478 15:32:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:12.478 15:32:55 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:06:12.478 15:32:55 -- pm/common@44 -- $ pid=5258 00:06:12.478 15:32:55 -- pm/common@50 -- $ kill -TERM 5258 00:06:12.478 15:32:55 -- 
pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:12.478 15:32:55 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:06:12.478 15:32:55 -- pm/common@44 -- $ pid=5259 00:06:12.478 15:32:55 -- pm/common@50 -- $ kill -TERM 5259 00:06:12.478 15:32:55 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:06:12.478 15:32:55 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:12.478 15:32:55 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:12.478 15:32:55 -- common/autotest_common.sh@1711 -- # lcov --version 00:06:12.478 15:32:55 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:12.737 15:32:55 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:12.737 15:32:55 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.737 15:32:55 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.737 15:32:55 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.737 15:32:55 -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.737 15:32:55 -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.737 15:32:55 -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.737 15:32:55 -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.737 15:32:55 -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.737 15:32:55 -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.737 15:32:55 -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.737 15:32:55 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.737 15:32:55 -- scripts/common.sh@344 -- # case "$op" in 00:06:12.737 15:32:55 -- scripts/common.sh@345 -- # : 1 00:06:12.737 15:32:55 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.737 15:32:55 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:12.737 15:32:55 -- scripts/common.sh@365 -- # decimal 1 00:06:12.737 15:32:55 -- scripts/common.sh@353 -- # local d=1 00:06:12.737 15:32:55 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.737 15:32:55 -- scripts/common.sh@355 -- # echo 1 00:06:12.737 15:32:55 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.737 15:32:55 -- scripts/common.sh@366 -- # decimal 2 00:06:12.737 15:32:55 -- scripts/common.sh@353 -- # local d=2 00:06:12.737 15:32:55 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.737 15:32:55 -- scripts/common.sh@355 -- # echo 2 00:06:12.737 15:32:55 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.737 15:32:55 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.737 15:32:55 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.737 15:32:55 -- scripts/common.sh@368 -- # return 0 00:06:12.737 15:32:55 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.737 15:32:55 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:12.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.737 --rc genhtml_branch_coverage=1 00:06:12.737 --rc genhtml_function_coverage=1 00:06:12.737 --rc genhtml_legend=1 00:06:12.737 --rc geninfo_all_blocks=1 00:06:12.737 --rc geninfo_unexecuted_blocks=1 00:06:12.737 00:06:12.737 ' 00:06:12.737 15:32:55 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:12.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.737 --rc genhtml_branch_coverage=1 00:06:12.737 --rc genhtml_function_coverage=1 00:06:12.737 --rc genhtml_legend=1 00:06:12.737 --rc geninfo_all_blocks=1 00:06:12.737 --rc geninfo_unexecuted_blocks=1 00:06:12.737 00:06:12.737 ' 00:06:12.737 15:32:55 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:12.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.737 --rc genhtml_branch_coverage=1 00:06:12.737 --rc 
genhtml_function_coverage=1 00:06:12.737 --rc genhtml_legend=1 00:06:12.737 --rc geninfo_all_blocks=1 00:06:12.737 --rc geninfo_unexecuted_blocks=1 00:06:12.737 00:06:12.737 ' 00:06:12.737 15:32:55 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:12.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.737 --rc genhtml_branch_coverage=1 00:06:12.737 --rc genhtml_function_coverage=1 00:06:12.737 --rc genhtml_legend=1 00:06:12.737 --rc geninfo_all_blocks=1 00:06:12.737 --rc geninfo_unexecuted_blocks=1 00:06:12.737 00:06:12.737 ' 00:06:12.737 15:32:55 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:12.737 15:32:55 -- nvmf/common.sh@7 -- # uname -s 00:06:12.737 15:32:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:12.737 15:32:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:12.737 15:32:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:12.737 15:32:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:12.737 15:32:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:12.738 15:32:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:12.738 15:32:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:12.738 15:32:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:12.738 15:32:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:12.738 15:32:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:12.738 15:32:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:792076c3-050c-4de8-8516-9038b1df6f80 00:06:12.738 15:32:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=792076c3-050c-4de8-8516-9038b1df6f80 00:06:12.738 15:32:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:12.738 15:32:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:12.738 15:32:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:12.738 15:32:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:06:12.738 15:32:55 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:12.738 15:32:55 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:12.738 15:32:55 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:12.738 15:32:55 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:12.738 15:32:55 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:12.738 15:32:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.738 15:32:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.738 15:32:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.738 15:32:55 -- paths/export.sh@5 -- # export PATH 00:06:12.738 15:32:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.738 15:32:55 -- nvmf/common.sh@51 -- # : 0 00:06:12.738 15:32:55 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:12.738 15:32:55 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:12.738 15:32:55 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:06:12.738 15:32:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:12.738 15:32:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:12.738 15:32:55 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:12.738 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:12.738 15:32:55 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:12.738 15:32:55 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:12.738 15:32:55 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:12.738 15:32:55 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:12.738 15:32:55 -- spdk/autotest.sh@32 -- # uname -s 00:06:12.738 15:32:55 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:12.738 15:32:55 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:12.738 15:32:55 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:12.738 15:32:55 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:06:12.738 15:32:55 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:12.738 15:32:55 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:12.738 15:32:55 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:12.738 15:32:55 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:12.738 15:32:55 -- spdk/autotest.sh@48 -- # udevadm_pid=54285 00:06:12.738 15:32:55 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:12.738 15:32:55 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:12.738 15:32:55 -- pm/common@17 -- # local monitor 00:06:12.738 15:32:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:12.738 15:32:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:12.738 15:32:55 -- pm/common@25 -- # sleep 1 00:06:12.738 15:32:55 -- pm/common@21 -- # date +%s 00:06:12.738 15:32:55 -- 
pm/common@21 -- # date +%s 00:06:12.738 15:32:55 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733499175 00:06:12.738 15:32:55 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733499175 00:06:12.738 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733499175_collect-cpu-load.pm.log 00:06:12.738 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733499175_collect-vmstat.pm.log 00:06:13.675 15:32:56 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:13.675 15:32:56 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:13.675 15:32:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:13.675 15:32:56 -- common/autotest_common.sh@10 -- # set +x 00:06:13.675 15:32:56 -- spdk/autotest.sh@59 -- # create_test_list 00:06:13.675 15:32:56 -- common/autotest_common.sh@752 -- # xtrace_disable 00:06:13.675 15:32:56 -- common/autotest_common.sh@10 -- # set +x 00:06:13.933 15:32:57 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:06:13.933 15:32:57 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:06:13.933 15:32:57 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:06:13.933 15:32:57 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:06:13.933 15:32:57 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:06:13.933 15:32:57 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:13.933 15:32:57 -- common/autotest_common.sh@1457 -- # uname 00:06:13.933 15:32:57 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:06:13.933 15:32:57 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:13.933 15:32:57 -- common/autotest_common.sh@1477 -- 
# uname 00:06:13.933 15:32:57 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:06:13.933 15:32:57 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:13.933 15:32:57 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:13.933 lcov: LCOV version 1.15 00:06:13.933 15:32:57 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:06:28.801 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:28.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:47.009 15:33:27 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:47.009 15:33:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:47.009 15:33:27 -- common/autotest_common.sh@10 -- # set +x 00:06:47.009 15:33:27 -- spdk/autotest.sh@78 -- # rm -f 00:06:47.009 15:33:27 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:47.009 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:47.009 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:47.009 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:47.009 15:33:28 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:47.009 15:33:28 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:47.009 15:33:28 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:47.009 15:33:28 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:06:47.009 
15:33:28 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:06:47.009 15:33:28 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:06:47.009 15:33:28 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:47.009 15:33:28 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:06:47.009 15:33:28 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:47.009 15:33:28 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:06:47.009 15:33:28 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:47.009 15:33:28 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:47.009 15:33:28 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:47.009 15:33:28 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:47.009 15:33:28 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:06:47.009 15:33:28 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:47.009 15:33:28 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:06:47.009 15:33:28 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:47.009 15:33:28 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:47.010 15:33:28 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:47.010 15:33:28 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:47.010 15:33:28 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:06:47.010 15:33:28 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:06:47.010 15:33:28 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:47.010 15:33:28 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:47.010 15:33:28 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:47.010 15:33:28 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:06:47.010 15:33:28 -- 
common/autotest_common.sh@1650 -- # local device=nvme1n3 00:06:47.010 15:33:28 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:47.010 15:33:28 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:47.010 15:33:28 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:47.010 15:33:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:47.010 15:33:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:47.010 15:33:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:47.010 15:33:28 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:47.010 15:33:28 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:47.010 No valid GPT data, bailing 00:06:47.010 15:33:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:47.010 15:33:28 -- scripts/common.sh@394 -- # pt= 00:06:47.010 15:33:28 -- scripts/common.sh@395 -- # return 1 00:06:47.010 15:33:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:47.010 1+0 records in 00:06:47.010 1+0 records out 00:06:47.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00715196 s, 147 MB/s 00:06:47.010 15:33:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:47.010 15:33:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:47.010 15:33:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:47.010 15:33:28 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:47.010 15:33:28 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:47.010 No valid GPT data, bailing 00:06:47.010 15:33:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:47.010 15:33:28 -- scripts/common.sh@394 -- # pt= 00:06:47.010 15:33:28 -- scripts/common.sh@395 -- # return 1 00:06:47.010 15:33:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:47.010 1+0 records in 00:06:47.010 1+0 records 
out 00:06:47.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00465904 s, 225 MB/s 00:06:47.010 15:33:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:47.010 15:33:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:47.010 15:33:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:06:47.010 15:33:28 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:06:47.010 15:33:28 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:06:47.010 No valid GPT data, bailing 00:06:47.010 15:33:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:06:47.010 15:33:28 -- scripts/common.sh@394 -- # pt= 00:06:47.010 15:33:28 -- scripts/common.sh@395 -- # return 1 00:06:47.010 15:33:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:06:47.010 1+0 records in 00:06:47.010 1+0 records out 00:06:47.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00450631 s, 233 MB/s 00:06:47.010 15:33:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:47.010 15:33:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:47.010 15:33:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:06:47.010 15:33:28 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:06:47.010 15:33:28 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:06:47.010 No valid GPT data, bailing 00:06:47.010 15:33:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:06:47.010 15:33:28 -- scripts/common.sh@394 -- # pt= 00:06:47.010 15:33:28 -- scripts/common.sh@395 -- # return 1 00:06:47.010 15:33:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:06:47.010 1+0 records in 00:06:47.010 1+0 records out 00:06:47.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00666173 s, 157 MB/s 00:06:47.010 15:33:28 -- spdk/autotest.sh@105 -- # sync 00:06:47.010 15:33:29 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd 
reap_spdk_processes 00:06:47.010 15:33:29 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:47.010 15:33:29 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:48.941 15:33:32 -- spdk/autotest.sh@111 -- # uname -s 00:06:48.941 15:33:32 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:48.941 15:33:32 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:48.942 15:33:32 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:49.878 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:49.878 Hugepages 00:06:49.878 node hugesize free / total 00:06:49.878 node0 1048576kB 0 / 0 00:06:49.878 node0 2048kB 0 / 0 00:06:49.878 00:06:49.878 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:49.878 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:49.878 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:50.138 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:50.138 15:33:33 -- spdk/autotest.sh@117 -- # uname -s 00:06:50.138 15:33:33 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:50.138 15:33:33 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:50.138 15:33:33 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:51.074 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:51.074 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:51.074 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:51.333 15:33:34 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:52.281 15:33:35 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:52.281 15:33:35 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:52.281 15:33:35 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:06:52.281 15:33:35 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 
00:06:52.281 15:33:35 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:52.281 15:33:35 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:52.281 15:33:35 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:52.281 15:33:35 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:52.281 15:33:35 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:52.281 15:33:35 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:06:52.281 15:33:35 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:52.281 15:33:35 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:52.847 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:52.847 Waiting for block devices as requested 00:06:52.847 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:53.106 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:53.106 15:33:36 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:53.106 15:33:36 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:53.106 15:33:36 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:53.106 15:33:36 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:06:53.106 15:33:36 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:53.106 15:33:36 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:53.106 15:33:36 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:53.106 15:33:36 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:06:53.106 15:33:36 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:06:53.106 
15:33:36 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:06:53.106 15:33:36 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:06:53.106 15:33:36 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:53.106 15:33:36 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:53.106 15:33:36 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:53.106 15:33:36 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:53.106 15:33:36 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:53.106 15:33:36 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:06:53.106 15:33:36 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:53.106 15:33:36 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:53.106 15:33:36 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:53.106 15:33:36 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:53.106 15:33:36 -- common/autotest_common.sh@1543 -- # continue 00:06:53.106 15:33:36 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:53.106 15:33:36 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:53.106 15:33:36 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:06:53.106 15:33:36 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:53.106 15:33:36 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:53.106 15:33:36 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:53.106 15:33:36 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:53.106 15:33:36 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:53.106 15:33:36 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:06:53.106 15:33:36 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:53.106 15:33:36 -- 
common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:53.106 15:33:36 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:53.106 15:33:36 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:53.366 15:33:36 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:53.366 15:33:36 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:53.366 15:33:36 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:53.366 15:33:36 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:53.366 15:33:36 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:53.366 15:33:36 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:53.366 15:33:36 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:53.366 15:33:36 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:53.366 15:33:36 -- common/autotest_common.sh@1543 -- # continue 00:06:53.366 15:33:36 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:53.366 15:33:36 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:53.366 15:33:36 -- common/autotest_common.sh@10 -- # set +x 00:06:53.366 15:33:36 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:53.366 15:33:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:53.366 15:33:36 -- common/autotest_common.sh@10 -- # set +x 00:06:53.366 15:33:36 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:54.302 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:54.302 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:54.302 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:54.302 15:33:37 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:54.302 15:33:37 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:54.302 15:33:37 -- common/autotest_common.sh@10 -- # set +x 00:06:54.559 15:33:37 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:54.559 15:33:37 -- 
common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:06:54.559 15:33:37 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:06:54.559 15:33:37 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:54.559 15:33:37 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:06:54.559 15:33:37 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:06:54.559 15:33:37 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:06:54.559 15:33:37 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:54.559 15:33:37 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:54.559 15:33:37 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:54.559 15:33:37 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:54.559 15:33:37 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:54.559 15:33:37 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:54.559 15:33:37 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:06:54.559 15:33:37 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:54.559 15:33:37 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:54.559 15:33:37 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:54.559 15:33:37 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:54.559 15:33:37 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:54.559 15:33:37 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:54.559 15:33:37 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:54.559 15:33:37 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:54.559 15:33:37 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:54.559 15:33:37 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:06:54.559 15:33:37 -- 
common/autotest_common.sh@1572 -- # return 0 00:06:54.559 15:33:37 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:06:54.559 15:33:37 -- common/autotest_common.sh@1580 -- # return 0 00:06:54.559 15:33:37 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:54.559 15:33:37 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:54.559 15:33:37 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:54.559 15:33:37 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:54.559 15:33:37 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:54.559 15:33:37 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:54.559 15:33:37 -- common/autotest_common.sh@10 -- # set +x 00:06:54.559 15:33:37 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:54.559 15:33:37 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:54.559 15:33:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.559 15:33:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.559 15:33:37 -- common/autotest_common.sh@10 -- # set +x 00:06:54.559 ************************************ 00:06:54.559 START TEST env 00:06:54.559 ************************************ 00:06:54.559 15:33:37 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:54.817 * Looking for test storage... 
00:06:54.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:54.817 15:33:37 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:54.817 15:33:37 env -- common/autotest_common.sh@1711 -- # lcov --version 00:06:54.817 15:33:37 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:54.817 15:33:37 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:54.817 15:33:37 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:54.817 15:33:37 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:54.817 15:33:37 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:54.817 15:33:37 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:54.817 15:33:37 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:54.817 15:33:37 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:54.817 15:33:37 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:54.817 15:33:37 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:54.817 15:33:37 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:54.817 15:33:37 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:54.817 15:33:37 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:54.817 15:33:37 env -- scripts/common.sh@344 -- # case "$op" in 00:06:54.817 15:33:37 env -- scripts/common.sh@345 -- # : 1 00:06:54.817 15:33:37 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:54.817 15:33:37 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:54.817 15:33:37 env -- scripts/common.sh@365 -- # decimal 1 00:06:54.817 15:33:37 env -- scripts/common.sh@353 -- # local d=1 00:06:54.817 15:33:37 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:54.817 15:33:38 env -- scripts/common.sh@355 -- # echo 1 00:06:54.817 15:33:38 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:54.817 15:33:38 env -- scripts/common.sh@366 -- # decimal 2 00:06:54.817 15:33:38 env -- scripts/common.sh@353 -- # local d=2 00:06:54.817 15:33:38 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:54.817 15:33:38 env -- scripts/common.sh@355 -- # echo 2 00:06:54.817 15:33:38 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:54.817 15:33:38 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:54.817 15:33:38 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:54.817 15:33:38 env -- scripts/common.sh@368 -- # return 0 00:06:54.817 15:33:38 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:54.817 15:33:38 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:54.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.817 --rc genhtml_branch_coverage=1 00:06:54.817 --rc genhtml_function_coverage=1 00:06:54.817 --rc genhtml_legend=1 00:06:54.817 --rc geninfo_all_blocks=1 00:06:54.817 --rc geninfo_unexecuted_blocks=1 00:06:54.817 00:06:54.817 ' 00:06:54.817 15:33:38 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:54.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.817 --rc genhtml_branch_coverage=1 00:06:54.817 --rc genhtml_function_coverage=1 00:06:54.817 --rc genhtml_legend=1 00:06:54.817 --rc geninfo_all_blocks=1 00:06:54.817 --rc geninfo_unexecuted_blocks=1 00:06:54.817 00:06:54.817 ' 00:06:54.817 15:33:38 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:54.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:54.817 --rc genhtml_branch_coverage=1 00:06:54.817 --rc genhtml_function_coverage=1 00:06:54.817 --rc genhtml_legend=1 00:06:54.817 --rc geninfo_all_blocks=1 00:06:54.817 --rc geninfo_unexecuted_blocks=1 00:06:54.817 00:06:54.817 ' 00:06:54.817 15:33:38 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:54.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.817 --rc genhtml_branch_coverage=1 00:06:54.817 --rc genhtml_function_coverage=1 00:06:54.817 --rc genhtml_legend=1 00:06:54.817 --rc geninfo_all_blocks=1 00:06:54.817 --rc geninfo_unexecuted_blocks=1 00:06:54.817 00:06:54.817 ' 00:06:54.817 15:33:38 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:54.817 15:33:38 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.817 15:33:38 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.817 15:33:38 env -- common/autotest_common.sh@10 -- # set +x 00:06:54.817 ************************************ 00:06:54.817 START TEST env_memory 00:06:54.817 ************************************ 00:06:54.817 15:33:38 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:54.817 00:06:54.817 00:06:54.817 CUnit - A unit testing framework for C - Version 2.1-3 00:06:54.817 http://cunit.sourceforge.net/ 00:06:54.817 00:06:54.817 00:06:54.817 Suite: memory 00:06:54.817 Test: alloc and free memory map ...[2024-12-06 15:33:38.094229] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:55.074 passed 00:06:55.074 Test: mem map translation ...[2024-12-06 15:33:38.139173] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:55.074 [2024-12-06 15:33:38.139343] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:55.074 [2024-12-06 15:33:38.139485] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:55.074 [2024-12-06 15:33:38.139562] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:55.074 passed 00:06:55.074 Test: mem map registration ...[2024-12-06 15:33:38.207675] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:55.074 [2024-12-06 15:33:38.207833] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:55.074 passed 00:06:55.074 Test: mem map adjacent registrations ...passed 00:06:55.074 00:06:55.074 Run Summary: Type Total Ran Passed Failed Inactive 00:06:55.074 suites 1 1 n/a 0 0 00:06:55.074 tests 4 4 4 0 0 00:06:55.074 asserts 152 152 152 0 n/a 00:06:55.074 00:06:55.074 Elapsed time = 0.242 seconds 00:06:55.074 00:06:55.074 ************************************ 00:06:55.074 END TEST env_memory 00:06:55.074 ************************************ 00:06:55.074 real 0m0.299s 00:06:55.074 user 0m0.253s 00:06:55.075 sys 0m0.035s 00:06:55.075 15:33:38 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.075 15:33:38 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:55.332 15:33:38 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:55.332 15:33:38 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.332 15:33:38 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.332 15:33:38 env -- common/autotest_common.sh@10 -- # set +x 00:06:55.332 
************************************ 00:06:55.332 START TEST env_vtophys 00:06:55.332 ************************************ 00:06:55.332 15:33:38 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:55.332 EAL: lib.eal log level changed from notice to debug 00:06:55.332 EAL: Detected lcore 0 as core 0 on socket 0 00:06:55.332 EAL: Detected lcore 1 as core 0 on socket 0 00:06:55.332 EAL: Detected lcore 2 as core 0 on socket 0 00:06:55.332 EAL: Detected lcore 3 as core 0 on socket 0 00:06:55.332 EAL: Detected lcore 4 as core 0 on socket 0 00:06:55.332 EAL: Detected lcore 5 as core 0 on socket 0 00:06:55.332 EAL: Detected lcore 6 as core 0 on socket 0 00:06:55.332 EAL: Detected lcore 7 as core 0 on socket 0 00:06:55.332 EAL: Detected lcore 8 as core 0 on socket 0 00:06:55.332 EAL: Detected lcore 9 as core 0 on socket 0 00:06:55.332 EAL: Maximum logical cores by configuration: 128 00:06:55.332 EAL: Detected CPU lcores: 10 00:06:55.332 EAL: Detected NUMA nodes: 1 00:06:55.332 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:55.332 EAL: Detected shared linkage of DPDK 00:06:55.332 EAL: No shared files mode enabled, IPC will be disabled 00:06:55.332 EAL: Selected IOVA mode 'PA' 00:06:55.332 EAL: Probing VFIO support... 00:06:55.332 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:55.332 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:55.332 EAL: Ask a virtual area of 0x2e000 bytes 00:06:55.332 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:55.332 EAL: Setting up physically contiguous memory... 
00:06:55.332 EAL: Setting maximum number of open files to 524288 00:06:55.332 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:55.332 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:55.332 EAL: Ask a virtual area of 0x61000 bytes 00:06:55.332 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:55.332 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:55.332 EAL: Ask a virtual area of 0x400000000 bytes 00:06:55.332 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:55.332 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:55.332 EAL: Ask a virtual area of 0x61000 bytes 00:06:55.332 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:55.332 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:55.332 EAL: Ask a virtual area of 0x400000000 bytes 00:06:55.332 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:55.332 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:55.332 EAL: Ask a virtual area of 0x61000 bytes 00:06:55.332 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:55.332 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:55.332 EAL: Ask a virtual area of 0x400000000 bytes 00:06:55.332 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:55.332 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:55.332 EAL: Ask a virtual area of 0x61000 bytes 00:06:55.332 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:55.332 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:55.332 EAL: Ask a virtual area of 0x400000000 bytes 00:06:55.332 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:55.332 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:55.332 EAL: Hugepages will be freed exactly as allocated. 
00:06:55.332 EAL: No shared files mode enabled, IPC is disabled 00:06:55.332 EAL: No shared files mode enabled, IPC is disabled 00:06:55.332 EAL: TSC frequency is ~2490000 KHz 00:06:55.332 EAL: Main lcore 0 is ready (tid=7ff28c6f9a40;cpuset=[0]) 00:06:55.332 EAL: Trying to obtain current memory policy. 00:06:55.332 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:55.332 EAL: Restoring previous memory policy: 0 00:06:55.332 EAL: request: mp_malloc_sync 00:06:55.332 EAL: No shared files mode enabled, IPC is disabled 00:06:55.332 EAL: Heap on socket 0 was expanded by 2MB 00:06:55.332 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:55.332 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:55.332 EAL: Mem event callback 'spdk:(nil)' registered 00:06:55.332 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:55.589 00:06:55.589 00:06:55.589 CUnit - A unit testing framework for C - Version 2.1-3 00:06:55.589 http://cunit.sourceforge.net/ 00:06:55.589 00:06:55.589 00:06:55.589 Suite: components_suite 00:06:56.157 Test: vtophys_malloc_test ...passed 00:06:56.157 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:56.157 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:56.157 EAL: Restoring previous memory policy: 4 00:06:56.157 EAL: Calling mem event callback 'spdk:(nil)' 00:06:56.157 EAL: request: mp_malloc_sync 00:06:56.157 EAL: No shared files mode enabled, IPC is disabled 00:06:56.157 EAL: Heap on socket 0 was expanded by 4MB 00:06:56.157 EAL: Calling mem event callback 'spdk:(nil)' 00:06:56.157 EAL: request: mp_malloc_sync 00:06:56.157 EAL: No shared files mode enabled, IPC is disabled 00:06:56.157 EAL: Heap on socket 0 was shrunk by 4MB 00:06:56.157 EAL: Trying to obtain current memory policy. 
00:06:56.157 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:56.157 EAL: Restoring previous memory policy: 4 00:06:56.157 EAL: Calling mem event callback 'spdk:(nil)' 00:06:56.157 EAL: request: mp_malloc_sync 00:06:56.157 EAL: No shared files mode enabled, IPC is disabled 00:06:56.157 EAL: Heap on socket 0 was expanded by 6MB 00:06:56.157 EAL: Calling mem event callback 'spdk:(nil)' 00:06:56.157 EAL: request: mp_malloc_sync 00:06:56.157 EAL: No shared files mode enabled, IPC is disabled 00:06:56.157 EAL: Heap on socket 0 was shrunk by 6MB 00:06:56.157 EAL: Trying to obtain current memory policy. 00:06:56.157 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:56.157 EAL: Restoring previous memory policy: 4 00:06:56.157 EAL: Calling mem event callback 'spdk:(nil)' 00:06:56.157 EAL: request: mp_malloc_sync 00:06:56.157 EAL: No shared files mode enabled, IPC is disabled 00:06:56.157 EAL: Heap on socket 0 was expanded by 10MB 00:06:56.157 EAL: Calling mem event callback 'spdk:(nil)' 00:06:56.157 EAL: request: mp_malloc_sync 00:06:56.157 EAL: No shared files mode enabled, IPC is disabled 00:06:56.157 EAL: Heap on socket 0 was shrunk by 10MB 00:06:56.157 EAL: Trying to obtain current memory policy. 00:06:56.157 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:56.157 EAL: Restoring previous memory policy: 4 00:06:56.157 EAL: Calling mem event callback 'spdk:(nil)' 00:06:56.157 EAL: request: mp_malloc_sync 00:06:56.157 EAL: No shared files mode enabled, IPC is disabled 00:06:56.157 EAL: Heap on socket 0 was expanded by 18MB 00:06:56.157 EAL: Calling mem event callback 'spdk:(nil)' 00:06:56.157 EAL: request: mp_malloc_sync 00:06:56.157 EAL: No shared files mode enabled, IPC is disabled 00:06:56.157 EAL: Heap on socket 0 was shrunk by 18MB 00:06:56.157 EAL: Trying to obtain current memory policy. 
00:06:56.157 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:56.157 EAL: Restoring previous memory policy: 4 00:06:56.157 EAL: Calling mem event callback 'spdk:(nil)' 00:06:56.157 EAL: request: mp_malloc_sync 00:06:56.157 EAL: No shared files mode enabled, IPC is disabled 00:06:56.157 EAL: Heap on socket 0 was expanded by 34MB 00:06:56.157 EAL: Calling mem event callback 'spdk:(nil)' 00:06:56.157 EAL: request: mp_malloc_sync 00:06:56.157 EAL: No shared files mode enabled, IPC is disabled 00:06:56.157 EAL: Heap on socket 0 was shrunk by 34MB 00:06:56.415 EAL: Trying to obtain current memory policy. 00:06:56.415 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:56.415 EAL: Restoring previous memory policy: 4 00:06:56.415 EAL: Calling mem event callback 'spdk:(nil)' 00:06:56.415 EAL: request: mp_malloc_sync 00:06:56.415 EAL: No shared files mode enabled, IPC is disabled 00:06:56.415 EAL: Heap on socket 0 was expanded by 66MB 00:06:56.415 EAL: Calling mem event callback 'spdk:(nil)' 00:06:56.415 EAL: request: mp_malloc_sync 00:06:56.415 EAL: No shared files mode enabled, IPC is disabled 00:06:56.415 EAL: Heap on socket 0 was shrunk by 66MB 00:06:56.673 EAL: Trying to obtain current memory policy. 00:06:56.673 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:56.673 EAL: Restoring previous memory policy: 4 00:06:56.673 EAL: Calling mem event callback 'spdk:(nil)' 00:06:56.673 EAL: request: mp_malloc_sync 00:06:56.673 EAL: No shared files mode enabled, IPC is disabled 00:06:56.673 EAL: Heap on socket 0 was expanded by 130MB 00:06:56.931 EAL: Calling mem event callback 'spdk:(nil)' 00:06:56.931 EAL: request: mp_malloc_sync 00:06:56.931 EAL: No shared files mode enabled, IPC is disabled 00:06:56.931 EAL: Heap on socket 0 was shrunk by 130MB 00:06:57.190 EAL: Trying to obtain current memory policy. 
00:06:57.190 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:57.190 EAL: Restoring previous memory policy: 4 00:06:57.190 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.190 EAL: request: mp_malloc_sync 00:06:57.190 EAL: No shared files mode enabled, IPC is disabled 00:06:57.190 EAL: Heap on socket 0 was expanded by 258MB 00:06:57.771 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.771 EAL: request: mp_malloc_sync 00:06:57.771 EAL: No shared files mode enabled, IPC is disabled 00:06:57.771 EAL: Heap on socket 0 was shrunk by 258MB 00:06:58.366 EAL: Trying to obtain current memory policy. 00:06:58.366 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:58.366 EAL: Restoring previous memory policy: 4 00:06:58.366 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.366 EAL: request: mp_malloc_sync 00:06:58.366 EAL: No shared files mode enabled, IPC is disabled 00:06:58.366 EAL: Heap on socket 0 was expanded by 514MB 00:06:59.300 EAL: Calling mem event callback 'spdk:(nil)' 00:06:59.559 EAL: request: mp_malloc_sync 00:06:59.559 EAL: No shared files mode enabled, IPC is disabled 00:06:59.559 EAL: Heap on socket 0 was shrunk by 514MB 00:07:00.494 EAL: Trying to obtain current memory policy. 
00:07:00.494 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:00.753 EAL: Restoring previous memory policy: 4 00:07:00.753 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.753 EAL: request: mp_malloc_sync 00:07:00.753 EAL: No shared files mode enabled, IPC is disabled 00:07:00.753 EAL: Heap on socket 0 was expanded by 1026MB 00:07:02.671 EAL: Calling mem event callback 'spdk:(nil)' 00:07:02.944 EAL: request: mp_malloc_sync 00:07:02.944 EAL: No shared files mode enabled, IPC is disabled 00:07:02.944 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:04.871 passed 00:07:04.871 00:07:04.871 Run Summary: Type Total Ran Passed Failed Inactive 00:07:04.871 suites 1 1 n/a 0 0 00:07:04.871 tests 2 2 2 0 0 00:07:04.871 asserts 5824 5824 5824 0 n/a 00:07:04.871 00:07:04.871 Elapsed time = 9.109 seconds 00:07:04.871 EAL: Calling mem event callback 'spdk:(nil)' 00:07:04.871 EAL: request: mp_malloc_sync 00:07:04.871 EAL: No shared files mode enabled, IPC is disabled 00:07:04.871 EAL: Heap on socket 0 was shrunk by 2MB 00:07:04.871 EAL: No shared files mode enabled, IPC is disabled 00:07:04.871 EAL: No shared files mode enabled, IPC is disabled 00:07:04.871 EAL: No shared files mode enabled, IPC is disabled 00:07:04.871 00:07:04.871 real 0m9.468s 00:07:04.871 user 0m7.935s 00:07:04.871 sys 0m1.360s 00:07:04.871 ************************************ 00:07:04.871 END TEST env_vtophys 00:07:04.871 ************************************ 00:07:04.871 15:33:47 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.871 15:33:47 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:04.871 15:33:47 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:04.871 15:33:47 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.871 15:33:47 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.871 15:33:47 env -- common/autotest_common.sh@10 -- # set +x 00:07:04.871 
************************************ 00:07:04.871 START TEST env_pci 00:07:04.871 ************************************ 00:07:04.871 15:33:47 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:04.871 00:07:04.871 00:07:04.871 CUnit - A unit testing framework for C - Version 2.1-3 00:07:04.871 http://cunit.sourceforge.net/ 00:07:04.871 00:07:04.871 00:07:04.871 Suite: pci 00:07:04.871 Test: pci_hook ...[2024-12-06 15:33:47.997828] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56632 has claimed it 00:07:04.871 passed 00:07:04.871 00:07:04.871 Run Summary: Type Total Ran Passed Failed Inactive 00:07:04.871 suites 1 1 n/a 0 0 00:07:04.871 tests 1 1 1 0 0 00:07:04.871 asserts 25 25 25 0 n/a 00:07:04.871 00:07:04.871 Elapsed time = 0.008 seconds 00:07:04.871 EAL: Cannot find device (10000:00:01.0) 00:07:04.871 EAL: Failed to attach device on primary process 00:07:04.871 00:07:04.871 real 0m0.128s 00:07:04.871 user 0m0.044s 00:07:04.871 sys 0m0.082s 00:07:04.871 15:33:48 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.871 15:33:48 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:04.871 ************************************ 00:07:04.871 END TEST env_pci 00:07:04.871 ************************************ 00:07:04.871 15:33:48 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:04.871 15:33:48 env -- env/env.sh@15 -- # uname 00:07:04.871 15:33:48 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:04.871 15:33:48 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:04.871 15:33:48 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:04.871 15:33:48 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:04.871 15:33:48 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.871 15:33:48 env -- common/autotest_common.sh@10 -- # set +x 00:07:04.871 ************************************ 00:07:04.871 START TEST env_dpdk_post_init 00:07:04.871 ************************************ 00:07:04.871 15:33:48 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:05.138 EAL: Detected CPU lcores: 10 00:07:05.138 EAL: Detected NUMA nodes: 1 00:07:05.138 EAL: Detected shared linkage of DPDK 00:07:05.138 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:05.138 EAL: Selected IOVA mode 'PA' 00:07:05.138 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:05.138 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:07:05.138 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:07:05.406 Starting DPDK initialization... 00:07:05.406 Starting SPDK post initialization... 00:07:05.406 SPDK NVMe probe 00:07:05.406 Attaching to 0000:00:10.0 00:07:05.406 Attaching to 0000:00:11.0 00:07:05.406 Attached to 0000:00:10.0 00:07:05.406 Attached to 0000:00:11.0 00:07:05.406 Cleaning up... 
00:07:05.406 00:07:05.406 real 0m0.308s 00:07:05.406 user 0m0.090s 00:07:05.406 sys 0m0.119s 00:07:05.407 ************************************ 00:07:05.407 END TEST env_dpdk_post_init 00:07:05.407 ************************************ 00:07:05.407 15:33:48 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.407 15:33:48 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:05.407 15:33:48 env -- env/env.sh@26 -- # uname 00:07:05.407 15:33:48 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:05.407 15:33:48 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:05.407 15:33:48 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:05.407 15:33:48 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.407 15:33:48 env -- common/autotest_common.sh@10 -- # set +x 00:07:05.407 ************************************ 00:07:05.407 START TEST env_mem_callbacks 00:07:05.407 ************************************ 00:07:05.407 15:33:48 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:05.407 EAL: Detected CPU lcores: 10 00:07:05.407 EAL: Detected NUMA nodes: 1 00:07:05.407 EAL: Detected shared linkage of DPDK 00:07:05.407 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:05.407 EAL: Selected IOVA mode 'PA' 00:07:05.668 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:05.668 00:07:05.668 00:07:05.668 CUnit - A unit testing framework for C - Version 2.1-3 00:07:05.668 http://cunit.sourceforge.net/ 00:07:05.668 00:07:05.668 00:07:05.668 Suite: memory 00:07:05.668 Test: test ... 
00:07:05.668 register 0x200000200000 2097152 00:07:05.668 malloc 3145728 00:07:05.668 register 0x200000400000 4194304 00:07:05.668 buf 0x2000004fffc0 len 3145728 PASSED 00:07:05.668 malloc 64 00:07:05.668 buf 0x2000004ffec0 len 64 PASSED 00:07:05.668 malloc 4194304 00:07:05.668 register 0x200000800000 6291456 00:07:05.668 buf 0x2000009fffc0 len 4194304 PASSED 00:07:05.668 free 0x2000004fffc0 3145728 00:07:05.668 free 0x2000004ffec0 64 00:07:05.668 unregister 0x200000400000 4194304 PASSED 00:07:05.668 free 0x2000009fffc0 4194304 00:07:05.668 unregister 0x200000800000 6291456 PASSED 00:07:05.668 malloc 8388608 00:07:05.668 register 0x200000400000 10485760 00:07:05.668 buf 0x2000005fffc0 len 8388608 PASSED 00:07:05.668 free 0x2000005fffc0 8388608 00:07:05.668 unregister 0x200000400000 10485760 PASSED 00:07:05.668 passed 00:07:05.668 00:07:05.668 Run Summary: Type Total Ran Passed Failed Inactive 00:07:05.668 suites 1 1 n/a 0 0 00:07:05.668 tests 1 1 1 0 0 00:07:05.668 asserts 15 15 15 0 n/a 00:07:05.668 00:07:05.668 Elapsed time = 0.082 seconds 00:07:05.668 00:07:05.668 real 0m0.311s 00:07:05.668 user 0m0.128s 00:07:05.668 sys 0m0.079s 00:07:05.668 15:33:48 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.668 15:33:48 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:05.668 ************************************ 00:07:05.668 END TEST env_mem_callbacks 00:07:05.668 ************************************ 00:07:05.668 00:07:05.668 real 0m11.147s 00:07:05.668 user 0m8.710s 00:07:05.668 sys 0m2.058s 00:07:05.668 15:33:48 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.668 15:33:48 env -- common/autotest_common.sh@10 -- # set +x 00:07:05.668 ************************************ 00:07:05.668 END TEST env 00:07:05.668 ************************************ 00:07:05.925 15:33:48 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:05.925 15:33:48 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:05.925 15:33:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.925 15:33:48 -- common/autotest_common.sh@10 -- # set +x 00:07:05.925 ************************************ 00:07:05.925 START TEST rpc 00:07:05.925 ************************************ 00:07:05.925 15:33:48 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:05.925 * Looking for test storage... 00:07:05.925 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:05.925 15:33:49 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:05.925 15:33:49 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:07:05.925 15:33:49 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:05.925 15:33:49 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:05.925 15:33:49 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.925 15:33:49 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.925 15:33:49 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.925 15:33:49 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.925 15:33:49 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.925 15:33:49 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.925 15:33:49 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.925 15:33:49 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.925 15:33:49 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.925 15:33:49 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.925 15:33:49 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.925 15:33:49 rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:05.925 15:33:49 rpc -- scripts/common.sh@345 -- # : 1 00:07:05.925 15:33:49 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.925 15:33:49 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:05.925 15:33:49 rpc -- scripts/common.sh@365 -- # decimal 1 00:07:05.925 15:33:49 rpc -- scripts/common.sh@353 -- # local d=1 00:07:05.926 15:33:49 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.926 15:33:49 rpc -- scripts/common.sh@355 -- # echo 1 00:07:05.926 15:33:49 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.183 15:33:49 rpc -- scripts/common.sh@366 -- # decimal 2 00:07:06.183 15:33:49 rpc -- scripts/common.sh@353 -- # local d=2 00:07:06.183 15:33:49 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.183 15:33:49 rpc -- scripts/common.sh@355 -- # echo 2 00:07:06.183 15:33:49 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.183 15:33:49 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.183 15:33:49 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.183 15:33:49 rpc -- scripts/common.sh@368 -- # return 0 00:07:06.183 15:33:49 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.184 15:33:49 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:06.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.184 --rc genhtml_branch_coverage=1 00:07:06.184 --rc genhtml_function_coverage=1 00:07:06.184 --rc genhtml_legend=1 00:07:06.184 --rc geninfo_all_blocks=1 00:07:06.184 --rc geninfo_unexecuted_blocks=1 00:07:06.184 00:07:06.184 ' 00:07:06.184 15:33:49 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:06.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.184 --rc genhtml_branch_coverage=1 00:07:06.184 --rc genhtml_function_coverage=1 00:07:06.184 --rc genhtml_legend=1 00:07:06.184 --rc geninfo_all_blocks=1 00:07:06.184 --rc geninfo_unexecuted_blocks=1 00:07:06.184 00:07:06.184 ' 00:07:06.184 15:33:49 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:06.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:06.184 --rc genhtml_branch_coverage=1 00:07:06.184 --rc genhtml_function_coverage=1 00:07:06.184 --rc genhtml_legend=1 00:07:06.184 --rc geninfo_all_blocks=1 00:07:06.184 --rc geninfo_unexecuted_blocks=1 00:07:06.184 00:07:06.184 ' 00:07:06.184 15:33:49 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:06.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.184 --rc genhtml_branch_coverage=1 00:07:06.184 --rc genhtml_function_coverage=1 00:07:06.184 --rc genhtml_legend=1 00:07:06.184 --rc geninfo_all_blocks=1 00:07:06.184 --rc geninfo_unexecuted_blocks=1 00:07:06.184 00:07:06.184 ' 00:07:06.184 15:33:49 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56763 00:07:06.184 15:33:49 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:07:06.184 15:33:49 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:06.184 15:33:49 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56763 00:07:06.184 15:33:49 rpc -- common/autotest_common.sh@835 -- # '[' -z 56763 ']' 00:07:06.184 15:33:49 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.184 15:33:49 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.184 15:33:49 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.184 15:33:49 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.184 15:33:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.184 [2024-12-06 15:33:49.359606] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:07:06.184 [2024-12-06 15:33:49.359971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56763 ] 00:07:06.442 [2024-12-06 15:33:49.550458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.442 [2024-12-06 15:33:49.695878] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:06.442 [2024-12-06 15:33:49.696143] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56763' to capture a snapshot of events at runtime. 00:07:06.442 [2024-12-06 15:33:49.696256] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:06.442 [2024-12-06 15:33:49.696314] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:06.442 [2024-12-06 15:33:49.696346] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56763 for offline analysis/debug. 
00:07:06.442 [2024-12-06 15:33:49.697810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.819 15:33:50 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.819 15:33:50 rpc -- common/autotest_common.sh@868 -- # return 0 00:07:07.819 15:33:50 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:07.819 15:33:50 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:07.819 15:33:50 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:07.819 15:33:50 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:07.819 15:33:50 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:07.819 15:33:50 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.819 15:33:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.819 ************************************ 00:07:07.819 START TEST rpc_integrity 00:07:07.819 ************************************ 00:07:07.819 15:33:50 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:07.819 15:33:50 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:07.819 15:33:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.819 15:33:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.819 15:33:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.819 15:33:50 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:07.819 15:33:50 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:07.819 15:33:50 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:07.819 15:33:50 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:07.819 15:33:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.819 15:33:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.819 15:33:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.819 15:33:50 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:07.819 15:33:50 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:07.819 15:33:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.819 15:33:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.819 15:33:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.819 15:33:50 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:07.819 { 00:07:07.819 "name": "Malloc0", 00:07:07.819 "aliases": [ 00:07:07.819 "43b8fdc5-7105-4c8c-8785-e1f549070ba9" 00:07:07.819 ], 00:07:07.819 "product_name": "Malloc disk", 00:07:07.819 "block_size": 512, 00:07:07.819 "num_blocks": 16384, 00:07:07.819 "uuid": "43b8fdc5-7105-4c8c-8785-e1f549070ba9", 00:07:07.819 "assigned_rate_limits": { 00:07:07.819 "rw_ios_per_sec": 0, 00:07:07.819 "rw_mbytes_per_sec": 0, 00:07:07.819 "r_mbytes_per_sec": 0, 00:07:07.819 "w_mbytes_per_sec": 0 00:07:07.819 }, 00:07:07.819 "claimed": false, 00:07:07.819 "zoned": false, 00:07:07.819 "supported_io_types": { 00:07:07.819 "read": true, 00:07:07.819 "write": true, 00:07:07.819 "unmap": true, 00:07:07.819 "flush": true, 00:07:07.819 "reset": true, 00:07:07.819 "nvme_admin": false, 00:07:07.819 "nvme_io": false, 00:07:07.819 "nvme_io_md": false, 00:07:07.819 "write_zeroes": true, 00:07:07.819 "zcopy": true, 00:07:07.819 "get_zone_info": false, 00:07:07.819 "zone_management": false, 00:07:07.819 "zone_append": false, 00:07:07.819 "compare": false, 00:07:07.819 "compare_and_write": false, 00:07:07.819 "abort": true, 00:07:07.819 "seek_hole": false, 
00:07:07.819 "seek_data": false, 00:07:07.819 "copy": true, 00:07:07.819 "nvme_iov_md": false 00:07:07.819 }, 00:07:07.819 "memory_domains": [ 00:07:07.819 { 00:07:07.819 "dma_device_id": "system", 00:07:07.819 "dma_device_type": 1 00:07:07.819 }, 00:07:07.819 { 00:07:07.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.819 "dma_device_type": 2 00:07:07.819 } 00:07:07.819 ], 00:07:07.819 "driver_specific": {} 00:07:07.819 } 00:07:07.819 ]' 00:07:07.819 15:33:50 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:07.819 15:33:50 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:07.819 15:33:50 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:07.820 15:33:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.820 15:33:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.820 [2024-12-06 15:33:50.926961] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:07.820 [2024-12-06 15:33:50.927056] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:07.820 [2024-12-06 15:33:50.927094] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:07.820 [2024-12-06 15:33:50.927117] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:07.820 [2024-12-06 15:33:50.930274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:07.820 [2024-12-06 15:33:50.930328] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:07.820 Passthru0 00:07:07.820 15:33:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.820 15:33:50 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:07.820 15:33:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.820 15:33:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:07:07.820 15:33:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.820 15:33:50 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:07.820 { 00:07:07.820 "name": "Malloc0", 00:07:07.820 "aliases": [ 00:07:07.820 "43b8fdc5-7105-4c8c-8785-e1f549070ba9" 00:07:07.820 ], 00:07:07.820 "product_name": "Malloc disk", 00:07:07.820 "block_size": 512, 00:07:07.820 "num_blocks": 16384, 00:07:07.820 "uuid": "43b8fdc5-7105-4c8c-8785-e1f549070ba9", 00:07:07.820 "assigned_rate_limits": { 00:07:07.820 "rw_ios_per_sec": 0, 00:07:07.820 "rw_mbytes_per_sec": 0, 00:07:07.820 "r_mbytes_per_sec": 0, 00:07:07.820 "w_mbytes_per_sec": 0 00:07:07.820 }, 00:07:07.820 "claimed": true, 00:07:07.820 "claim_type": "exclusive_write", 00:07:07.820 "zoned": false, 00:07:07.820 "supported_io_types": { 00:07:07.820 "read": true, 00:07:07.820 "write": true, 00:07:07.820 "unmap": true, 00:07:07.820 "flush": true, 00:07:07.820 "reset": true, 00:07:07.820 "nvme_admin": false, 00:07:07.820 "nvme_io": false, 00:07:07.820 "nvme_io_md": false, 00:07:07.820 "write_zeroes": true, 00:07:07.820 "zcopy": true, 00:07:07.820 "get_zone_info": false, 00:07:07.820 "zone_management": false, 00:07:07.820 "zone_append": false, 00:07:07.820 "compare": false, 00:07:07.820 "compare_and_write": false, 00:07:07.820 "abort": true, 00:07:07.820 "seek_hole": false, 00:07:07.820 "seek_data": false, 00:07:07.820 "copy": true, 00:07:07.820 "nvme_iov_md": false 00:07:07.820 }, 00:07:07.820 "memory_domains": [ 00:07:07.820 { 00:07:07.820 "dma_device_id": "system", 00:07:07.820 "dma_device_type": 1 00:07:07.820 }, 00:07:07.820 { 00:07:07.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.820 "dma_device_type": 2 00:07:07.820 } 00:07:07.820 ], 00:07:07.820 "driver_specific": {} 00:07:07.820 }, 00:07:07.820 { 00:07:07.820 "name": "Passthru0", 00:07:07.820 "aliases": [ 00:07:07.820 "cf137986-b2da-5744-b0c9-82a872a86249" 00:07:07.820 ], 00:07:07.820 "product_name": "passthru", 00:07:07.820 
"block_size": 512, 00:07:07.820 "num_blocks": 16384, 00:07:07.820 "uuid": "cf137986-b2da-5744-b0c9-82a872a86249", 00:07:07.820 "assigned_rate_limits": { 00:07:07.820 "rw_ios_per_sec": 0, 00:07:07.820 "rw_mbytes_per_sec": 0, 00:07:07.820 "r_mbytes_per_sec": 0, 00:07:07.820 "w_mbytes_per_sec": 0 00:07:07.820 }, 00:07:07.820 "claimed": false, 00:07:07.820 "zoned": false, 00:07:07.820 "supported_io_types": { 00:07:07.820 "read": true, 00:07:07.820 "write": true, 00:07:07.820 "unmap": true, 00:07:07.820 "flush": true, 00:07:07.820 "reset": true, 00:07:07.820 "nvme_admin": false, 00:07:07.820 "nvme_io": false, 00:07:07.820 "nvme_io_md": false, 00:07:07.820 "write_zeroes": true, 00:07:07.820 "zcopy": true, 00:07:07.820 "get_zone_info": false, 00:07:07.820 "zone_management": false, 00:07:07.820 "zone_append": false, 00:07:07.820 "compare": false, 00:07:07.820 "compare_and_write": false, 00:07:07.820 "abort": true, 00:07:07.820 "seek_hole": false, 00:07:07.820 "seek_data": false, 00:07:07.820 "copy": true, 00:07:07.820 "nvme_iov_md": false 00:07:07.820 }, 00:07:07.820 "memory_domains": [ 00:07:07.820 { 00:07:07.820 "dma_device_id": "system", 00:07:07.820 "dma_device_type": 1 00:07:07.820 }, 00:07:07.820 { 00:07:07.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.820 "dma_device_type": 2 00:07:07.820 } 00:07:07.820 ], 00:07:07.820 "driver_specific": { 00:07:07.820 "passthru": { 00:07:07.820 "name": "Passthru0", 00:07:07.820 "base_bdev_name": "Malloc0" 00:07:07.820 } 00:07:07.820 } 00:07:07.820 } 00:07:07.820 ]' 00:07:07.820 15:33:50 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:07.820 15:33:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:07.820 15:33:51 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:07.820 15:33:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.820 15:33:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.820 15:33:51 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.820 15:33:51 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:07.820 15:33:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.820 15:33:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.820 15:33:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.820 15:33:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:07.820 15:33:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.820 15:33:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.820 15:33:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.820 15:33:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:07.820 15:33:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:08.079 ************************************ 00:07:08.079 END TEST rpc_integrity 00:07:08.079 ************************************ 00:07:08.079 15:33:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:08.079 00:07:08.079 real 0m0.364s 00:07:08.079 user 0m0.192s 00:07:08.079 sys 0m0.071s 00:07:08.079 15:33:51 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.079 15:33:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:08.079 15:33:51 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:08.079 15:33:51 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:08.079 15:33:51 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.079 15:33:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.079 ************************************ 00:07:08.079 START TEST rpc_plugins 00:07:08.079 ************************************ 00:07:08.079 15:33:51 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:07:08.079 15:33:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:07:08.079 15:33:51 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.079 15:33:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:08.079 15:33:51 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.079 15:33:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:08.079 15:33:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:08.079 15:33:51 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.079 15:33:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:08.079 15:33:51 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.079 15:33:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:08.079 { 00:07:08.079 "name": "Malloc1", 00:07:08.079 "aliases": [ 00:07:08.079 "8ccdd2eb-eb26-489d-9426-1f90b2530e71" 00:07:08.079 ], 00:07:08.079 "product_name": "Malloc disk", 00:07:08.079 "block_size": 4096, 00:07:08.079 "num_blocks": 256, 00:07:08.079 "uuid": "8ccdd2eb-eb26-489d-9426-1f90b2530e71", 00:07:08.079 "assigned_rate_limits": { 00:07:08.079 "rw_ios_per_sec": 0, 00:07:08.079 "rw_mbytes_per_sec": 0, 00:07:08.079 "r_mbytes_per_sec": 0, 00:07:08.079 "w_mbytes_per_sec": 0 00:07:08.079 }, 00:07:08.079 "claimed": false, 00:07:08.079 "zoned": false, 00:07:08.079 "supported_io_types": { 00:07:08.079 "read": true, 00:07:08.079 "write": true, 00:07:08.079 "unmap": true, 00:07:08.079 "flush": true, 00:07:08.079 "reset": true, 00:07:08.079 "nvme_admin": false, 00:07:08.079 "nvme_io": false, 00:07:08.079 "nvme_io_md": false, 00:07:08.079 "write_zeroes": true, 00:07:08.079 "zcopy": true, 00:07:08.079 "get_zone_info": false, 00:07:08.079 "zone_management": false, 00:07:08.079 "zone_append": false, 00:07:08.079 "compare": false, 00:07:08.079 "compare_and_write": false, 00:07:08.079 "abort": true, 00:07:08.079 "seek_hole": false, 00:07:08.079 "seek_data": false, 00:07:08.079 "copy": 
true, 00:07:08.079 "nvme_iov_md": false 00:07:08.079 }, 00:07:08.079 "memory_domains": [ 00:07:08.079 { 00:07:08.079 "dma_device_id": "system", 00:07:08.079 "dma_device_type": 1 00:07:08.079 }, 00:07:08.079 { 00:07:08.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:08.079 "dma_device_type": 2 00:07:08.079 } 00:07:08.079 ], 00:07:08.079 "driver_specific": {} 00:07:08.079 } 00:07:08.079 ]' 00:07:08.079 15:33:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:08.079 15:33:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:08.079 15:33:51 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:08.079 15:33:51 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.079 15:33:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:08.079 15:33:51 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.079 15:33:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:08.079 15:33:51 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.079 15:33:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:08.079 15:33:51 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.079 15:33:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:08.079 15:33:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:08.079 ************************************ 00:07:08.079 END TEST rpc_plugins 00:07:08.079 ************************************ 00:07:08.079 15:33:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:08.079 00:07:08.079 real 0m0.182s 00:07:08.079 user 0m0.105s 00:07:08.079 sys 0m0.027s 00:07:08.079 15:33:51 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.079 15:33:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:08.337 15:33:51 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:08.338 15:33:51 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:08.338 15:33:51 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.338 15:33:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.338 ************************************ 00:07:08.338 START TEST rpc_trace_cmd_test 00:07:08.338 ************************************ 00:07:08.338 15:33:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:07:08.338 15:33:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:08.338 15:33:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:08.338 15:33:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.338 15:33:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.338 15:33:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.338 15:33:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:08.338 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56763", 00:07:08.338 "tpoint_group_mask": "0x8", 00:07:08.338 "iscsi_conn": { 00:07:08.338 "mask": "0x2", 00:07:08.338 "tpoint_mask": "0x0" 00:07:08.338 }, 00:07:08.338 "scsi": { 00:07:08.338 "mask": "0x4", 00:07:08.338 "tpoint_mask": "0x0" 00:07:08.338 }, 00:07:08.338 "bdev": { 00:07:08.338 "mask": "0x8", 00:07:08.338 "tpoint_mask": "0xffffffffffffffff" 00:07:08.338 }, 00:07:08.338 "nvmf_rdma": { 00:07:08.338 "mask": "0x10", 00:07:08.338 "tpoint_mask": "0x0" 00:07:08.338 }, 00:07:08.338 "nvmf_tcp": { 00:07:08.338 "mask": "0x20", 00:07:08.338 "tpoint_mask": "0x0" 00:07:08.338 }, 00:07:08.338 "ftl": { 00:07:08.338 "mask": "0x40", 00:07:08.338 "tpoint_mask": "0x0" 00:07:08.338 }, 00:07:08.338 "blobfs": { 00:07:08.338 "mask": "0x80", 00:07:08.338 "tpoint_mask": "0x0" 00:07:08.338 }, 00:07:08.338 "dsa": { 00:07:08.338 "mask": "0x200", 00:07:08.338 "tpoint_mask": "0x0" 00:07:08.338 }, 00:07:08.338 "thread": { 00:07:08.338 "mask": "0x400", 00:07:08.338 
"tpoint_mask": "0x0" 00:07:08.338 }, 00:07:08.338 "nvme_pcie": { 00:07:08.338 "mask": "0x800", 00:07:08.338 "tpoint_mask": "0x0" 00:07:08.338 }, 00:07:08.338 "iaa": { 00:07:08.338 "mask": "0x1000", 00:07:08.338 "tpoint_mask": "0x0" 00:07:08.338 }, 00:07:08.338 "nvme_tcp": { 00:07:08.338 "mask": "0x2000", 00:07:08.338 "tpoint_mask": "0x0" 00:07:08.338 }, 00:07:08.338 "bdev_nvme": { 00:07:08.338 "mask": "0x4000", 00:07:08.338 "tpoint_mask": "0x0" 00:07:08.338 }, 00:07:08.338 "sock": { 00:07:08.338 "mask": "0x8000", 00:07:08.338 "tpoint_mask": "0x0" 00:07:08.338 }, 00:07:08.338 "blob": { 00:07:08.338 "mask": "0x10000", 00:07:08.338 "tpoint_mask": "0x0" 00:07:08.338 }, 00:07:08.338 "bdev_raid": { 00:07:08.338 "mask": "0x20000", 00:07:08.338 "tpoint_mask": "0x0" 00:07:08.338 }, 00:07:08.338 "scheduler": { 00:07:08.338 "mask": "0x40000", 00:07:08.338 "tpoint_mask": "0x0" 00:07:08.338 } 00:07:08.338 }' 00:07:08.338 15:33:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:08.338 15:33:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:07:08.338 15:33:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:08.338 15:33:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:08.338 15:33:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:08.338 15:33:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:08.338 15:33:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:08.596 15:33:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:08.596 15:33:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:08.596 ************************************ 00:07:08.596 END TEST rpc_trace_cmd_test 00:07:08.596 ************************************ 00:07:08.596 15:33:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:08.596 00:07:08.596 real 0m0.231s 00:07:08.596 user 
0m0.171s 00:07:08.596 sys 0m0.050s 00:07:08.596 15:33:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.596 15:33:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.596 15:33:51 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:08.596 15:33:51 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:08.596 15:33:51 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:08.596 15:33:51 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:08.596 15:33:51 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.596 15:33:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.596 ************************************ 00:07:08.596 START TEST rpc_daemon_integrity 00:07:08.596 ************************************ 00:07:08.596 15:33:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:08.596 15:33:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:08.596 15:33:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.596 15:33:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:08.596 15:33:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.596 15:33:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:08.596 15:33:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:08.596 15:33:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:08.596 15:33:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:08.596 15:33:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.596 15:33:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:08.596 15:33:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.596 15:33:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:07:08.597 15:33:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:08.597 15:33:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.597 15:33:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:08.597 15:33:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.597 15:33:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:08.597 { 00:07:08.597 "name": "Malloc2", 00:07:08.597 "aliases": [ 00:07:08.597 "f613aab3-863b-4d18-8906-6b5c25ad6d27" 00:07:08.597 ], 00:07:08.597 "product_name": "Malloc disk", 00:07:08.597 "block_size": 512, 00:07:08.597 "num_blocks": 16384, 00:07:08.597 "uuid": "f613aab3-863b-4d18-8906-6b5c25ad6d27", 00:07:08.597 "assigned_rate_limits": { 00:07:08.597 "rw_ios_per_sec": 0, 00:07:08.597 "rw_mbytes_per_sec": 0, 00:07:08.597 "r_mbytes_per_sec": 0, 00:07:08.597 "w_mbytes_per_sec": 0 00:07:08.597 }, 00:07:08.597 "claimed": false, 00:07:08.597 "zoned": false, 00:07:08.597 "supported_io_types": { 00:07:08.597 "read": true, 00:07:08.597 "write": true, 00:07:08.597 "unmap": true, 00:07:08.597 "flush": true, 00:07:08.597 "reset": true, 00:07:08.597 "nvme_admin": false, 00:07:08.597 "nvme_io": false, 00:07:08.597 "nvme_io_md": false, 00:07:08.597 "write_zeroes": true, 00:07:08.597 "zcopy": true, 00:07:08.597 "get_zone_info": false, 00:07:08.597 "zone_management": false, 00:07:08.597 "zone_append": false, 00:07:08.597 "compare": false, 00:07:08.597 "compare_and_write": false, 00:07:08.597 "abort": true, 00:07:08.597 "seek_hole": false, 00:07:08.597 "seek_data": false, 00:07:08.597 "copy": true, 00:07:08.597 "nvme_iov_md": false 00:07:08.597 }, 00:07:08.597 "memory_domains": [ 00:07:08.597 { 00:07:08.597 "dma_device_id": "system", 00:07:08.597 "dma_device_type": 1 00:07:08.597 }, 00:07:08.597 { 00:07:08.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:08.597 "dma_device_type": 2 00:07:08.597 } 
00:07:08.597 ], 00:07:08.597 "driver_specific": {} 00:07:08.597 } 00:07:08.597 ]' 00:07:08.597 15:33:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:08.855 15:33:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:08.855 15:33:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:08.855 15:33:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.855 15:33:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:08.855 [2024-12-06 15:33:51.910437] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:08.855 [2024-12-06 15:33:51.910690] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:08.855 [2024-12-06 15:33:51.910735] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:08.855 [2024-12-06 15:33:51.910754] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:08.855 [2024-12-06 15:33:51.913909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:08.855 [2024-12-06 15:33:51.914079] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:08.855 Passthru0 00:07:08.855 15:33:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.855 15:33:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:08.855 15:33:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.855 15:33:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:08.855 15:33:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.855 15:33:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:08.855 { 00:07:08.855 "name": "Malloc2", 00:07:08.855 "aliases": [ 00:07:08.855 "f613aab3-863b-4d18-8906-6b5c25ad6d27" 
00:07:08.855 ], 00:07:08.855 "product_name": "Malloc disk", 00:07:08.855 "block_size": 512, 00:07:08.855 "num_blocks": 16384, 00:07:08.855 "uuid": "f613aab3-863b-4d18-8906-6b5c25ad6d27", 00:07:08.855 "assigned_rate_limits": { 00:07:08.855 "rw_ios_per_sec": 0, 00:07:08.855 "rw_mbytes_per_sec": 0, 00:07:08.855 "r_mbytes_per_sec": 0, 00:07:08.855 "w_mbytes_per_sec": 0 00:07:08.855 }, 00:07:08.855 "claimed": true, 00:07:08.855 "claim_type": "exclusive_write", 00:07:08.855 "zoned": false, 00:07:08.855 "supported_io_types": { 00:07:08.855 "read": true, 00:07:08.855 "write": true, 00:07:08.855 "unmap": true, 00:07:08.855 "flush": true, 00:07:08.855 "reset": true, 00:07:08.855 "nvme_admin": false, 00:07:08.855 "nvme_io": false, 00:07:08.855 "nvme_io_md": false, 00:07:08.855 "write_zeroes": true, 00:07:08.855 "zcopy": true, 00:07:08.855 "get_zone_info": false, 00:07:08.855 "zone_management": false, 00:07:08.855 "zone_append": false, 00:07:08.855 "compare": false, 00:07:08.855 "compare_and_write": false, 00:07:08.855 "abort": true, 00:07:08.855 "seek_hole": false, 00:07:08.855 "seek_data": false, 00:07:08.855 "copy": true, 00:07:08.855 "nvme_iov_md": false 00:07:08.855 }, 00:07:08.855 "memory_domains": [ 00:07:08.855 { 00:07:08.855 "dma_device_id": "system", 00:07:08.855 "dma_device_type": 1 00:07:08.855 }, 00:07:08.855 { 00:07:08.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:08.855 "dma_device_type": 2 00:07:08.855 } 00:07:08.855 ], 00:07:08.855 "driver_specific": {} 00:07:08.855 }, 00:07:08.855 { 00:07:08.855 "name": "Passthru0", 00:07:08.855 "aliases": [ 00:07:08.855 "885a7cb6-48a4-5c83-9a49-7d145b8b78c4" 00:07:08.855 ], 00:07:08.855 "product_name": "passthru", 00:07:08.855 "block_size": 512, 00:07:08.855 "num_blocks": 16384, 00:07:08.855 "uuid": "885a7cb6-48a4-5c83-9a49-7d145b8b78c4", 00:07:08.855 "assigned_rate_limits": { 00:07:08.855 "rw_ios_per_sec": 0, 00:07:08.855 "rw_mbytes_per_sec": 0, 00:07:08.855 "r_mbytes_per_sec": 0, 00:07:08.855 "w_mbytes_per_sec": 0 
00:07:08.855 }, 00:07:08.855 "claimed": false, 00:07:08.855 "zoned": false, 00:07:08.855 "supported_io_types": { 00:07:08.855 "read": true, 00:07:08.855 "write": true, 00:07:08.855 "unmap": true, 00:07:08.855 "flush": true, 00:07:08.855 "reset": true, 00:07:08.855 "nvme_admin": false, 00:07:08.855 "nvme_io": false, 00:07:08.855 "nvme_io_md": false, 00:07:08.855 "write_zeroes": true, 00:07:08.855 "zcopy": true, 00:07:08.855 "get_zone_info": false, 00:07:08.855 "zone_management": false, 00:07:08.855 "zone_append": false, 00:07:08.855 "compare": false, 00:07:08.855 "compare_and_write": false, 00:07:08.855 "abort": true, 00:07:08.855 "seek_hole": false, 00:07:08.855 "seek_data": false, 00:07:08.855 "copy": true, 00:07:08.855 "nvme_iov_md": false 00:07:08.855 }, 00:07:08.855 "memory_domains": [ 00:07:08.855 { 00:07:08.855 "dma_device_id": "system", 00:07:08.855 "dma_device_type": 1 00:07:08.855 }, 00:07:08.855 { 00:07:08.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:08.855 "dma_device_type": 2 00:07:08.855 } 00:07:08.855 ], 00:07:08.855 "driver_specific": { 00:07:08.855 "passthru": { 00:07:08.855 "name": "Passthru0", 00:07:08.855 "base_bdev_name": "Malloc2" 00:07:08.855 } 00:07:08.855 } 00:07:08.855 } 00:07:08.855 ]' 00:07:08.855 15:33:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:08.855 15:33:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:08.855 15:33:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:08.855 15:33:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.855 15:33:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:08.855 15:33:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.855 15:33:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:08.855 15:33:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:08.855 15:33:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:08.855 15:33:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.855 15:33:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:08.855 15:33:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.855 15:33:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:08.855 15:33:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.855 15:33:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:08.855 15:33:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:08.855 ************************************ 00:07:08.855 END TEST rpc_daemon_integrity 00:07:08.855 ************************************ 00:07:08.855 15:33:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:08.855 00:07:08.855 real 0m0.370s 00:07:08.855 user 0m0.187s 00:07:08.855 sys 0m0.072s 00:07:08.855 15:33:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.855 15:33:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:09.113 15:33:52 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:09.113 15:33:52 rpc -- rpc/rpc.sh@84 -- # killprocess 56763 00:07:09.113 15:33:52 rpc -- common/autotest_common.sh@954 -- # '[' -z 56763 ']' 00:07:09.113 15:33:52 rpc -- common/autotest_common.sh@958 -- # kill -0 56763 00:07:09.113 15:33:52 rpc -- common/autotest_common.sh@959 -- # uname 00:07:09.113 15:33:52 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:09.113 15:33:52 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56763 00:07:09.113 killing process with pid 56763 00:07:09.113 15:33:52 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:09.113 15:33:52 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:07:09.113 15:33:52 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56763' 00:07:09.113 15:33:52 rpc -- common/autotest_common.sh@973 -- # kill 56763 00:07:09.113 15:33:52 rpc -- common/autotest_common.sh@978 -- # wait 56763 00:07:11.646 00:07:11.646 real 0m5.898s 00:07:11.646 user 0m6.201s 00:07:11.646 sys 0m1.211s 00:07:11.646 15:33:54 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.646 ************************************ 00:07:11.646 END TEST rpc 00:07:11.646 ************************************ 00:07:11.646 15:33:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.646 15:33:54 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:11.646 15:33:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:11.646 15:33:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.646 15:33:54 -- common/autotest_common.sh@10 -- # set +x 00:07:11.905 ************************************ 00:07:11.905 START TEST skip_rpc 00:07:11.905 ************************************ 00:07:11.905 15:33:54 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:11.905 * Looking for test storage... 
00:07:11.905 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:11.905 15:33:55 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:11.905 15:33:55 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:07:11.905 15:33:55 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:11.905 15:33:55 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:11.905 15:33:55 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.905 15:33:55 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.905 15:33:55 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.905 15:33:55 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.905 15:33:55 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.905 15:33:55 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.905 15:33:55 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:11.905 15:33:55 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:11.905 15:33:55 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.905 15:33:55 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.905 15:33:55 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.905 15:33:55 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:11.905 15:33:55 skip_rpc -- scripts/common.sh@345 -- # : 1 00:07:11.905 15:33:55 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.905 15:33:55 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:11.905 15:33:55 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:11.905 15:33:55 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:07:11.905 15:33:55 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.905 15:33:55 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:07:11.905 15:33:55 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.905 15:33:55 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:11.905 15:33:55 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:07:11.905 15:33:55 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.905 15:33:55 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:07:11.905 15:33:55 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.905 15:33:55 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.905 15:33:55 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.905 15:33:55 skip_rpc -- scripts/common.sh@368 -- # return 0 00:07:11.905 15:33:55 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.905 15:33:55 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:11.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.905 --rc genhtml_branch_coverage=1 00:07:11.905 --rc genhtml_function_coverage=1 00:07:11.905 --rc genhtml_legend=1 00:07:11.905 --rc geninfo_all_blocks=1 00:07:11.905 --rc geninfo_unexecuted_blocks=1 00:07:11.905 00:07:11.905 ' 00:07:11.905 15:33:55 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:11.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.905 --rc genhtml_branch_coverage=1 00:07:11.905 --rc genhtml_function_coverage=1 00:07:11.905 --rc genhtml_legend=1 00:07:11.905 --rc geninfo_all_blocks=1 00:07:11.905 --rc geninfo_unexecuted_blocks=1 00:07:11.905 00:07:11.905 ' 00:07:11.905 15:33:55 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:07:11.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.905 --rc genhtml_branch_coverage=1 00:07:11.905 --rc genhtml_function_coverage=1 00:07:11.905 --rc genhtml_legend=1 00:07:11.905 --rc geninfo_all_blocks=1 00:07:11.905 --rc geninfo_unexecuted_blocks=1 00:07:11.905 00:07:11.905 ' 00:07:11.905 15:33:55 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:11.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.905 --rc genhtml_branch_coverage=1 00:07:11.905 --rc genhtml_function_coverage=1 00:07:11.905 --rc genhtml_legend=1 00:07:11.905 --rc geninfo_all_blocks=1 00:07:11.905 --rc geninfo_unexecuted_blocks=1 00:07:11.905 00:07:11.905 ' 00:07:11.905 15:33:55 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:11.905 15:33:55 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:11.905 15:33:55 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:11.905 15:33:55 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:11.905 15:33:55 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.905 15:33:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.163 ************************************ 00:07:12.163 START TEST skip_rpc 00:07:12.163 ************************************ 00:07:12.163 15:33:55 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:07:12.163 15:33:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=56999 00:07:12.163 15:33:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:12.163 15:33:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:12.163 15:33:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:07:12.163 [2024-12-06 15:33:55.324526] Starting SPDK v25.01-pre 
git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:07:12.163 [2024-12-06 15:33:55.324679] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56999 ] 00:07:12.421 [2024-12-06 15:33:55.501439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.421 [2024-12-06 15:33:55.644726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.689 15:34:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:17.689 15:34:00 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:17.689 15:34:00 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:17.689 15:34:00 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:17.689 15:34:00 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.689 15:34:00 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:17.689 15:34:00 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.689 15:34:00 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:07:17.689 15:34:00 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.689 15:34:00 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.689 15:34:00 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:17.689 15:34:00 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:17.689 15:34:00 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:17.689 15:34:00 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:17.689 15:34:00 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:07:17.689 15:34:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:17.689 15:34:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56999 00:07:17.689 15:34:00 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 56999 ']' 00:07:17.689 15:34:00 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 56999 00:07:17.689 15:34:00 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:07:17.689 15:34:00 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.689 15:34:00 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56999 00:07:17.689 killing process with pid 56999 00:07:17.690 15:34:00 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:17.690 15:34:00 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:17.690 15:34:00 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56999' 00:07:17.690 15:34:00 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 56999 00:07:17.690 15:34:00 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 56999 00:07:20.221 00:07:20.221 real 0m7.770s 00:07:20.221 user 0m7.101s 00:07:20.221 sys 0m0.592s 00:07:20.221 ************************************ 00:07:20.221 END TEST skip_rpc 00:07:20.221 ************************************ 00:07:20.221 15:34:02 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.221 15:34:02 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.221 15:34:03 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:20.221 15:34:03 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:20.221 15:34:03 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.221 15:34:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.221 
************************************ 00:07:20.221 START TEST skip_rpc_with_json 00:07:20.221 ************************************ 00:07:20.221 15:34:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:07:20.221 15:34:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:20.221 15:34:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57108 00:07:20.221 15:34:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:20.221 15:34:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:20.221 15:34:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57108 00:07:20.221 15:34:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57108 ']' 00:07:20.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.221 15:34:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.221 15:34:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.221 15:34:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.221 15:34:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.221 15:34:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:20.221 [2024-12-06 15:34:03.174086] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:07:20.221 [2024-12-06 15:34:03.174242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57108 ] 00:07:20.221 [2024-12-06 15:34:03.362966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.221 [2024-12-06 15:34:03.512731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.653 15:34:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.653 15:34:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:07:21.653 15:34:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:21.653 15:34:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.653 15:34:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:21.653 [2024-12-06 15:34:04.600149] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:21.653 request: 00:07:21.653 { 00:07:21.653 "trtype": "tcp", 00:07:21.653 "method": "nvmf_get_transports", 00:07:21.653 "req_id": 1 00:07:21.653 } 00:07:21.653 Got JSON-RPC error response 00:07:21.653 response: 00:07:21.653 { 00:07:21.653 "code": -19, 00:07:21.653 "message": "No such device" 00:07:21.653 } 00:07:21.653 15:34:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:21.653 15:34:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:21.653 15:34:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.653 15:34:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:21.653 [2024-12-06 15:34:04.612230] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:07:21.653 15:34:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.653 15:34:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:21.653 15:34:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.653 15:34:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:21.653 15:34:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.653 15:34:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:21.653 { 00:07:21.653 "subsystems": [ 00:07:21.653 { 00:07:21.653 "subsystem": "fsdev", 00:07:21.653 "config": [ 00:07:21.653 { 00:07:21.653 "method": "fsdev_set_opts", 00:07:21.653 "params": { 00:07:21.653 "fsdev_io_pool_size": 65535, 00:07:21.653 "fsdev_io_cache_size": 256 00:07:21.653 } 00:07:21.653 } 00:07:21.653 ] 00:07:21.653 }, 00:07:21.653 { 00:07:21.653 "subsystem": "keyring", 00:07:21.653 "config": [] 00:07:21.653 }, 00:07:21.653 { 00:07:21.653 "subsystem": "iobuf", 00:07:21.653 "config": [ 00:07:21.653 { 00:07:21.653 "method": "iobuf_set_options", 00:07:21.653 "params": { 00:07:21.653 "small_pool_count": 8192, 00:07:21.653 "large_pool_count": 1024, 00:07:21.653 "small_bufsize": 8192, 00:07:21.653 "large_bufsize": 135168, 00:07:21.653 "enable_numa": false 00:07:21.653 } 00:07:21.653 } 00:07:21.653 ] 00:07:21.653 }, 00:07:21.653 { 00:07:21.653 "subsystem": "sock", 00:07:21.653 "config": [ 00:07:21.653 { 00:07:21.653 "method": "sock_set_default_impl", 00:07:21.653 "params": { 00:07:21.653 "impl_name": "posix" 00:07:21.653 } 00:07:21.653 }, 00:07:21.653 { 00:07:21.653 "method": "sock_impl_set_options", 00:07:21.653 "params": { 00:07:21.653 "impl_name": "ssl", 00:07:21.653 "recv_buf_size": 4096, 00:07:21.653 "send_buf_size": 4096, 00:07:21.653 "enable_recv_pipe": true, 00:07:21.653 "enable_quickack": false, 00:07:21.653 
"enable_placement_id": 0, 00:07:21.654 "enable_zerocopy_send_server": true, 00:07:21.654 "enable_zerocopy_send_client": false, 00:07:21.654 "zerocopy_threshold": 0, 00:07:21.654 "tls_version": 0, 00:07:21.654 "enable_ktls": false 00:07:21.654 } 00:07:21.654 }, 00:07:21.654 { 00:07:21.654 "method": "sock_impl_set_options", 00:07:21.654 "params": { 00:07:21.654 "impl_name": "posix", 00:07:21.654 "recv_buf_size": 2097152, 00:07:21.654 "send_buf_size": 2097152, 00:07:21.654 "enable_recv_pipe": true, 00:07:21.654 "enable_quickack": false, 00:07:21.654 "enable_placement_id": 0, 00:07:21.654 "enable_zerocopy_send_server": true, 00:07:21.654 "enable_zerocopy_send_client": false, 00:07:21.654 "zerocopy_threshold": 0, 00:07:21.654 "tls_version": 0, 00:07:21.654 "enable_ktls": false 00:07:21.654 } 00:07:21.654 } 00:07:21.654 ] 00:07:21.654 }, 00:07:21.654 { 00:07:21.654 "subsystem": "vmd", 00:07:21.654 "config": [] 00:07:21.654 }, 00:07:21.654 { 00:07:21.654 "subsystem": "accel", 00:07:21.654 "config": [ 00:07:21.654 { 00:07:21.654 "method": "accel_set_options", 00:07:21.654 "params": { 00:07:21.654 "small_cache_size": 128, 00:07:21.654 "large_cache_size": 16, 00:07:21.654 "task_count": 2048, 00:07:21.654 "sequence_count": 2048, 00:07:21.654 "buf_count": 2048 00:07:21.654 } 00:07:21.654 } 00:07:21.654 ] 00:07:21.654 }, 00:07:21.654 { 00:07:21.654 "subsystem": "bdev", 00:07:21.654 "config": [ 00:07:21.654 { 00:07:21.654 "method": "bdev_set_options", 00:07:21.654 "params": { 00:07:21.654 "bdev_io_pool_size": 65535, 00:07:21.654 "bdev_io_cache_size": 256, 00:07:21.654 "bdev_auto_examine": true, 00:07:21.654 "iobuf_small_cache_size": 128, 00:07:21.654 "iobuf_large_cache_size": 16 00:07:21.654 } 00:07:21.654 }, 00:07:21.654 { 00:07:21.654 "method": "bdev_raid_set_options", 00:07:21.654 "params": { 00:07:21.654 "process_window_size_kb": 1024, 00:07:21.654 "process_max_bandwidth_mb_sec": 0 00:07:21.654 } 00:07:21.654 }, 00:07:21.654 { 00:07:21.654 "method": "bdev_iscsi_set_options", 
00:07:21.654 "params": { 00:07:21.654 "timeout_sec": 30 00:07:21.654 } 00:07:21.654 }, 00:07:21.654 { 00:07:21.654 "method": "bdev_nvme_set_options", 00:07:21.654 "params": { 00:07:21.654 "action_on_timeout": "none", 00:07:21.654 "timeout_us": 0, 00:07:21.654 "timeout_admin_us": 0, 00:07:21.654 "keep_alive_timeout_ms": 10000, 00:07:21.654 "arbitration_burst": 0, 00:07:21.654 "low_priority_weight": 0, 00:07:21.654 "medium_priority_weight": 0, 00:07:21.654 "high_priority_weight": 0, 00:07:21.654 "nvme_adminq_poll_period_us": 10000, 00:07:21.654 "nvme_ioq_poll_period_us": 0, 00:07:21.654 "io_queue_requests": 0, 00:07:21.654 "delay_cmd_submit": true, 00:07:21.654 "transport_retry_count": 4, 00:07:21.654 "bdev_retry_count": 3, 00:07:21.654 "transport_ack_timeout": 0, 00:07:21.654 "ctrlr_loss_timeout_sec": 0, 00:07:21.654 "reconnect_delay_sec": 0, 00:07:21.654 "fast_io_fail_timeout_sec": 0, 00:07:21.654 "disable_auto_failback": false, 00:07:21.654 "generate_uuids": false, 00:07:21.654 "transport_tos": 0, 00:07:21.654 "nvme_error_stat": false, 00:07:21.654 "rdma_srq_size": 0, 00:07:21.654 "io_path_stat": false, 00:07:21.654 "allow_accel_sequence": false, 00:07:21.654 "rdma_max_cq_size": 0, 00:07:21.654 "rdma_cm_event_timeout_ms": 0, 00:07:21.654 "dhchap_digests": [ 00:07:21.654 "sha256", 00:07:21.654 "sha384", 00:07:21.654 "sha512" 00:07:21.654 ], 00:07:21.654 "dhchap_dhgroups": [ 00:07:21.654 "null", 00:07:21.654 "ffdhe2048", 00:07:21.654 "ffdhe3072", 00:07:21.654 "ffdhe4096", 00:07:21.654 "ffdhe6144", 00:07:21.654 "ffdhe8192" 00:07:21.654 ], 00:07:21.654 "rdma_umr_per_io": false 00:07:21.654 } 00:07:21.654 }, 00:07:21.654 { 00:07:21.654 "method": "bdev_nvme_set_hotplug", 00:07:21.654 "params": { 00:07:21.654 "period_us": 100000, 00:07:21.654 "enable": false 00:07:21.654 } 00:07:21.654 }, 00:07:21.654 { 00:07:21.654 "method": "bdev_wait_for_examine" 00:07:21.654 } 00:07:21.654 ] 00:07:21.654 }, 00:07:21.654 { 00:07:21.654 "subsystem": "scsi", 00:07:21.654 "config": null 
00:07:21.654 }, 00:07:21.654 { 00:07:21.654 "subsystem": "scheduler", 00:07:21.654 "config": [ 00:07:21.654 { 00:07:21.654 "method": "framework_set_scheduler", 00:07:21.654 "params": { 00:07:21.654 "name": "static" 00:07:21.654 } 00:07:21.654 } 00:07:21.654 ] 00:07:21.654 }, 00:07:21.654 { 00:07:21.654 "subsystem": "vhost_scsi", 00:07:21.654 "config": [] 00:07:21.654 }, 00:07:21.654 { 00:07:21.654 "subsystem": "vhost_blk", 00:07:21.654 "config": [] 00:07:21.654 }, 00:07:21.654 { 00:07:21.654 "subsystem": "ublk", 00:07:21.654 "config": [] 00:07:21.654 }, 00:07:21.654 { 00:07:21.654 "subsystem": "nbd", 00:07:21.654 "config": [] 00:07:21.654 }, 00:07:21.654 { 00:07:21.654 "subsystem": "nvmf", 00:07:21.654 "config": [ 00:07:21.654 { 00:07:21.654 "method": "nvmf_set_config", 00:07:21.654 "params": { 00:07:21.654 "discovery_filter": "match_any", 00:07:21.654 "admin_cmd_passthru": { 00:07:21.654 "identify_ctrlr": false 00:07:21.654 }, 00:07:21.654 "dhchap_digests": [ 00:07:21.654 "sha256", 00:07:21.654 "sha384", 00:07:21.654 "sha512" 00:07:21.654 ], 00:07:21.654 "dhchap_dhgroups": [ 00:07:21.654 "null", 00:07:21.654 "ffdhe2048", 00:07:21.654 "ffdhe3072", 00:07:21.654 "ffdhe4096", 00:07:21.654 "ffdhe6144", 00:07:21.654 "ffdhe8192" 00:07:21.654 ] 00:07:21.654 } 00:07:21.654 }, 00:07:21.654 { 00:07:21.654 "method": "nvmf_set_max_subsystems", 00:07:21.654 "params": { 00:07:21.654 "max_subsystems": 1024 00:07:21.654 } 00:07:21.654 }, 00:07:21.654 { 00:07:21.654 "method": "nvmf_set_crdt", 00:07:21.654 "params": { 00:07:21.654 "crdt1": 0, 00:07:21.654 "crdt2": 0, 00:07:21.654 "crdt3": 0 00:07:21.654 } 00:07:21.654 }, 00:07:21.654 { 00:07:21.654 "method": "nvmf_create_transport", 00:07:21.654 "params": { 00:07:21.654 "trtype": "TCP", 00:07:21.654 "max_queue_depth": 128, 00:07:21.654 "max_io_qpairs_per_ctrlr": 127, 00:07:21.654 "in_capsule_data_size": 4096, 00:07:21.654 "max_io_size": 131072, 00:07:21.654 "io_unit_size": 131072, 00:07:21.654 "max_aq_depth": 128, 00:07:21.654 
"num_shared_buffers": 511, 00:07:21.654 "buf_cache_size": 4294967295, 00:07:21.654 "dif_insert_or_strip": false, 00:07:21.654 "zcopy": false, 00:07:21.654 "c2h_success": true, 00:07:21.654 "sock_priority": 0, 00:07:21.654 "abort_timeout_sec": 1, 00:07:21.654 "ack_timeout": 0, 00:07:21.654 "data_wr_pool_size": 0 00:07:21.654 } 00:07:21.654 } 00:07:21.654 ] 00:07:21.654 }, 00:07:21.654 { 00:07:21.654 "subsystem": "iscsi", 00:07:21.654 "config": [ 00:07:21.654 { 00:07:21.654 "method": "iscsi_set_options", 00:07:21.654 "params": { 00:07:21.654 "node_base": "iqn.2016-06.io.spdk", 00:07:21.654 "max_sessions": 128, 00:07:21.654 "max_connections_per_session": 2, 00:07:21.654 "max_queue_depth": 64, 00:07:21.654 "default_time2wait": 2, 00:07:21.654 "default_time2retain": 20, 00:07:21.654 "first_burst_length": 8192, 00:07:21.654 "immediate_data": true, 00:07:21.654 "allow_duplicated_isid": false, 00:07:21.654 "error_recovery_level": 0, 00:07:21.654 "nop_timeout": 60, 00:07:21.654 "nop_in_interval": 30, 00:07:21.654 "disable_chap": false, 00:07:21.654 "require_chap": false, 00:07:21.654 "mutual_chap": false, 00:07:21.654 "chap_group": 0, 00:07:21.654 "max_large_datain_per_connection": 64, 00:07:21.654 "max_r2t_per_connection": 4, 00:07:21.654 "pdu_pool_size": 36864, 00:07:21.654 "immediate_data_pool_size": 16384, 00:07:21.654 "data_out_pool_size": 2048 00:07:21.654 } 00:07:21.654 } 00:07:21.654 ] 00:07:21.654 } 00:07:21.654 ] 00:07:21.654 } 00:07:21.654 15:34:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:21.654 15:34:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57108 00:07:21.654 15:34:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57108 ']' 00:07:21.654 15:34:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57108 00:07:21.654 15:34:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:21.654 15:34:04 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:21.654 15:34:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57108 00:07:21.654 killing process with pid 57108 00:07:21.654 15:34:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:21.654 15:34:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:21.654 15:34:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57108' 00:07:21.654 15:34:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57108 00:07:21.654 15:34:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57108 00:07:24.930 15:34:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57170 00:07:24.930 15:34:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:24.930 15:34:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:30.218 15:34:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57170 00:07:30.218 15:34:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57170 ']' 00:07:30.218 15:34:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57170 00:07:30.218 15:34:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:30.218 15:34:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.218 15:34:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57170 00:07:30.218 killing process with pid 57170 00:07:30.218 15:34:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:30.218 15:34:12 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:30.218 15:34:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57170' 00:07:30.218 15:34:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57170 00:07:30.218 15:34:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57170 00:07:32.119 15:34:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:32.119 15:34:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:32.119 00:07:32.119 real 0m12.255s 00:07:32.119 user 0m11.296s 00:07:32.119 sys 0m1.278s 00:07:32.119 15:34:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.119 ************************************ 00:07:32.119 END TEST skip_rpc_with_json 00:07:32.119 ************************************ 00:07:32.119 15:34:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:32.119 15:34:15 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:32.119 15:34:15 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:32.119 15:34:15 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.119 15:34:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.119 ************************************ 00:07:32.119 START TEST skip_rpc_with_delay 00:07:32.119 ************************************ 00:07:32.119 15:34:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:07:32.119 15:34:15 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:32.119 15:34:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- 
# local es=0 00:07:32.119 15:34:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:32.119 15:34:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:32.119 15:34:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.119 15:34:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:32.119 15:34:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.119 15:34:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:32.119 15:34:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.119 15:34:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:32.119 15:34:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:32.119 15:34:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:32.376 [2024-12-06 15:34:15.508583] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:07:32.376 15:34:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:07:32.376 15:34:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:32.376 15:34:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:32.376 15:34:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:32.376 00:07:32.376 real 0m0.221s 00:07:32.376 user 0m0.103s 00:07:32.376 sys 0m0.115s 00:07:32.376 15:34:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.376 15:34:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:32.376 ************************************ 00:07:32.376 END TEST skip_rpc_with_delay 00:07:32.376 ************************************ 00:07:32.376 15:34:15 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:32.376 15:34:15 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:32.377 15:34:15 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:32.377 15:34:15 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:32.377 15:34:15 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.377 15:34:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.633 ************************************ 00:07:32.633 START TEST exit_on_failed_rpc_init 00:07:32.633 ************************************ 00:07:32.633 15:34:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:07:32.633 15:34:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57309 00:07:32.633 15:34:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:32.633 15:34:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57309 00:07:32.633 15:34:15 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57309 ']' 00:07:32.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.633 15:34:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.633 15:34:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.634 15:34:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.634 15:34:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.634 15:34:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:32.634 [2024-12-06 15:34:15.810573] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:07:32.634 [2024-12-06 15:34:15.810732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57309 ] 00:07:32.891 [2024-12-06 15:34:15.998478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.891 [2024-12-06 15:34:16.140365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.264 15:34:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:34.264 15:34:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:07:34.264 15:34:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:34.264 15:34:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:34.264 15:34:17 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:07:34.264 15:34:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:34.264 15:34:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:34.264 15:34:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.264 15:34:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:34.264 15:34:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.264 15:34:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:34.264 15:34:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.264 15:34:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:34.264 15:34:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:34.264 15:34:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:34.264 [2024-12-06 15:34:17.308035] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:07:34.264 [2024-12-06 15:34:17.308389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57327 ] 00:07:34.264 [2024-12-06 15:34:17.493712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.523 [2024-12-06 15:34:17.700641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.523 [2024-12-06 15:34:17.701059] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:07:34.523 [2024-12-06 15:34:17.701105] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:34.523 [2024-12-06 15:34:17.701134] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:34.781 15:34:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:07:34.781 15:34:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:34.781 15:34:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:07:34.781 15:34:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:07:34.781 15:34:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:07:34.781 15:34:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:34.781 15:34:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:34.781 15:34:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57309 00:07:34.781 15:34:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57309 ']' 00:07:34.781 15:34:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57309 00:07:34.781 15:34:17 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:07:34.781 15:34:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:34.781 15:34:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57309 00:07:34.781 killing process with pid 57309 00:07:34.781 15:34:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:34.781 15:34:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:34.781 15:34:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57309' 00:07:34.781 15:34:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57309 00:07:34.781 15:34:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57309 00:07:38.065 00:07:38.065 real 0m5.071s 00:07:38.065 user 0m5.395s 00:07:38.065 sys 0m0.833s 00:07:38.065 15:34:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.065 ************************************ 00:07:38.065 END TEST exit_on_failed_rpc_init 00:07:38.065 ************************************ 00:07:38.065 15:34:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:38.065 15:34:20 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:38.065 00:07:38.065 real 0m25.863s 00:07:38.065 user 0m24.118s 00:07:38.065 sys 0m3.146s 00:07:38.065 15:34:20 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.065 ************************************ 00:07:38.065 15:34:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.065 END TEST skip_rpc 00:07:38.065 ************************************ 00:07:38.065 15:34:20 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:38.065 15:34:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:38.065 15:34:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.065 15:34:20 -- common/autotest_common.sh@10 -- # set +x 00:07:38.065 ************************************ 00:07:38.065 START TEST rpc_client 00:07:38.065 ************************************ 00:07:38.065 15:34:20 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:38.065 * Looking for test storage... 00:07:38.065 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:38.065 15:34:20 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:38.065 15:34:20 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:07:38.065 15:34:20 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:38.065 15:34:21 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:38.065 15:34:21 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:38.065 15:34:21 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:38.065 15:34:21 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:38.065 15:34:21 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:07:38.065 15:34:21 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:07:38.065 15:34:21 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:07:38.065 15:34:21 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:07:38.065 15:34:21 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:07:38.065 15:34:21 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:07:38.065 15:34:21 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:07:38.065 15:34:21 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:38.065 15:34:21 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:07:38.065 15:34:21 rpc_client -- scripts/common.sh@345 
-- # : 1 00:07:38.065 15:34:21 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:38.065 15:34:21 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:38.065 15:34:21 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:07:38.065 15:34:21 rpc_client -- scripts/common.sh@353 -- # local d=1 00:07:38.065 15:34:21 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:38.065 15:34:21 rpc_client -- scripts/common.sh@355 -- # echo 1 00:07:38.065 15:34:21 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:07:38.065 15:34:21 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:07:38.065 15:34:21 rpc_client -- scripts/common.sh@353 -- # local d=2 00:07:38.065 15:34:21 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:38.065 15:34:21 rpc_client -- scripts/common.sh@355 -- # echo 2 00:07:38.065 15:34:21 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:07:38.065 15:34:21 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:38.065 15:34:21 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:38.065 15:34:21 rpc_client -- scripts/common.sh@368 -- # return 0 00:07:38.065 15:34:21 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:38.065 15:34:21 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:38.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.065 --rc genhtml_branch_coverage=1 00:07:38.065 --rc genhtml_function_coverage=1 00:07:38.065 --rc genhtml_legend=1 00:07:38.065 --rc geninfo_all_blocks=1 00:07:38.065 --rc geninfo_unexecuted_blocks=1 00:07:38.065 00:07:38.065 ' 00:07:38.065 15:34:21 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:38.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.065 --rc genhtml_branch_coverage=1 00:07:38.065 --rc genhtml_function_coverage=1 00:07:38.065 --rc 
genhtml_legend=1 00:07:38.065 --rc geninfo_all_blocks=1 00:07:38.065 --rc geninfo_unexecuted_blocks=1 00:07:38.065 00:07:38.065 ' 00:07:38.065 15:34:21 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:38.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.065 --rc genhtml_branch_coverage=1 00:07:38.065 --rc genhtml_function_coverage=1 00:07:38.065 --rc genhtml_legend=1 00:07:38.065 --rc geninfo_all_blocks=1 00:07:38.065 --rc geninfo_unexecuted_blocks=1 00:07:38.065 00:07:38.065 ' 00:07:38.065 15:34:21 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:38.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.065 --rc genhtml_branch_coverage=1 00:07:38.065 --rc genhtml_function_coverage=1 00:07:38.065 --rc genhtml_legend=1 00:07:38.065 --rc geninfo_all_blocks=1 00:07:38.065 --rc geninfo_unexecuted_blocks=1 00:07:38.065 00:07:38.065 ' 00:07:38.066 15:34:21 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:38.066 OK 00:07:38.066 15:34:21 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:38.066 00:07:38.066 real 0m0.329s 00:07:38.066 user 0m0.175s 00:07:38.066 sys 0m0.170s 00:07:38.066 15:34:21 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.066 15:34:21 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:38.066 ************************************ 00:07:38.066 END TEST rpc_client 00:07:38.066 ************************************ 00:07:38.066 15:34:21 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:38.066 15:34:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:38.066 15:34:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.066 15:34:21 -- common/autotest_common.sh@10 -- # set +x 00:07:38.066 ************************************ 00:07:38.066 START TEST json_config 
00:07:38.066 ************************************ 00:07:38.066 15:34:21 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:38.066 15:34:21 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:38.066 15:34:21 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:07:38.066 15:34:21 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:38.326 15:34:21 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:38.326 15:34:21 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:38.326 15:34:21 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:38.326 15:34:21 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:38.326 15:34:21 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:07:38.326 15:34:21 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:07:38.326 15:34:21 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:07:38.326 15:34:21 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:07:38.326 15:34:21 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:07:38.326 15:34:21 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:07:38.326 15:34:21 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:07:38.326 15:34:21 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:38.326 15:34:21 json_config -- scripts/common.sh@344 -- # case "$op" in 00:07:38.326 15:34:21 json_config -- scripts/common.sh@345 -- # : 1 00:07:38.326 15:34:21 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:38.326 15:34:21 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:38.326 15:34:21 json_config -- scripts/common.sh@365 -- # decimal 1 00:07:38.326 15:34:21 json_config -- scripts/common.sh@353 -- # local d=1 00:07:38.326 15:34:21 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:38.326 15:34:21 json_config -- scripts/common.sh@355 -- # echo 1 00:07:38.326 15:34:21 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:07:38.326 15:34:21 json_config -- scripts/common.sh@366 -- # decimal 2 00:07:38.326 15:34:21 json_config -- scripts/common.sh@353 -- # local d=2 00:07:38.326 15:34:21 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:38.326 15:34:21 json_config -- scripts/common.sh@355 -- # echo 2 00:07:38.326 15:34:21 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:07:38.326 15:34:21 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:38.326 15:34:21 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:38.326 15:34:21 json_config -- scripts/common.sh@368 -- # return 0 00:07:38.326 15:34:21 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:38.326 15:34:21 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:38.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.326 --rc genhtml_branch_coverage=1 00:07:38.326 --rc genhtml_function_coverage=1 00:07:38.326 --rc genhtml_legend=1 00:07:38.326 --rc geninfo_all_blocks=1 00:07:38.326 --rc geninfo_unexecuted_blocks=1 00:07:38.326 00:07:38.326 ' 00:07:38.326 15:34:21 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:38.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.326 --rc genhtml_branch_coverage=1 00:07:38.326 --rc genhtml_function_coverage=1 00:07:38.326 --rc genhtml_legend=1 00:07:38.326 --rc geninfo_all_blocks=1 00:07:38.326 --rc geninfo_unexecuted_blocks=1 00:07:38.326 00:07:38.326 ' 00:07:38.326 15:34:21 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:38.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.326 --rc genhtml_branch_coverage=1 00:07:38.326 --rc genhtml_function_coverage=1 00:07:38.326 --rc genhtml_legend=1 00:07:38.326 --rc geninfo_all_blocks=1 00:07:38.326 --rc geninfo_unexecuted_blocks=1 00:07:38.326 00:07:38.326 ' 00:07:38.326 15:34:21 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:38.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.326 --rc genhtml_branch_coverage=1 00:07:38.326 --rc genhtml_function_coverage=1 00:07:38.326 --rc genhtml_legend=1 00:07:38.326 --rc geninfo_all_blocks=1 00:07:38.326 --rc geninfo_unexecuted_blocks=1 00:07:38.326 00:07:38.326 ' 00:07:38.326 15:34:21 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:38.326 15:34:21 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:38.326 15:34:21 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:38.326 15:34:21 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:38.326 15:34:21 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:38.326 15:34:21 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:38.326 15:34:21 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:38.326 15:34:21 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:38.326 15:34:21 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:38.326 15:34:21 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:38.326 15:34:21 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:38.326 15:34:21 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:38.326 15:34:21 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:792076c3-050c-4de8-8516-9038b1df6f80 00:07:38.326 15:34:21 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=792076c3-050c-4de8-8516-9038b1df6f80 00:07:38.326 15:34:21 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:38.326 15:34:21 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:38.326 15:34:21 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:38.326 15:34:21 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:38.326 15:34:21 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:38.326 15:34:21 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:07:38.326 15:34:21 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:38.326 15:34:21 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:38.326 15:34:21 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:38.326 15:34:21 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.326 15:34:21 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.326 15:34:21 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.326 15:34:21 json_config -- paths/export.sh@5 -- # export PATH 00:07:38.326 15:34:21 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.326 15:34:21 json_config -- nvmf/common.sh@51 -- # : 0 00:07:38.326 15:34:21 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:38.326 15:34:21 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:38.326 15:34:21 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:38.326 15:34:21 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:38.326 15:34:21 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:38.326 15:34:21 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:38.326 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:38.326 15:34:21 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:38.326 15:34:21 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:38.326 15:34:21 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:38.326 15:34:21 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:07:38.326 15:34:21 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:38.326 15:34:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:38.326 15:34:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:38.326 15:34:21 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:38.326 15:34:21 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:07:38.326 WARNING: No tests are enabled so not running JSON configuration tests 00:07:38.326 15:34:21 json_config -- json_config/json_config.sh@28 -- # exit 0 00:07:38.326 00:07:38.326 real 0m0.234s 00:07:38.326 user 0m0.140s 00:07:38.327 sys 0m0.095s 00:07:38.327 15:34:21 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.327 15:34:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:38.327 ************************************ 00:07:38.327 END TEST json_config 00:07:38.327 ************************************ 00:07:38.327 15:34:21 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:38.327 15:34:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:38.327 15:34:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.327 15:34:21 -- common/autotest_common.sh@10 -- # set +x 00:07:38.327 ************************************ 00:07:38.327 START TEST json_config_extra_key 00:07:38.327 ************************************ 00:07:38.327 15:34:21 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:38.586 15:34:21 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:38.586 15:34:21 json_config_extra_key -- 
common/autotest_common.sh@1711 -- # lcov --version 00:07:38.586 15:34:21 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:38.586 15:34:21 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:38.586 15:34:21 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:38.586 15:34:21 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:38.586 15:34:21 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:38.586 15:34:21 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:38.586 15:34:21 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:38.586 15:34:21 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:38.586 15:34:21 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:38.586 15:34:21 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:38.587 15:34:21 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:07:38.587 15:34:21 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:38.587 15:34:21 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:38.587 15:34:21 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:38.587 15:34:21 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:38.587 15:34:21 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:38.587 15:34:21 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:38.587 15:34:21 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:38.587 15:34:21 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:38.587 15:34:21 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:38.587 15:34:21 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:38.587 15:34:21 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:38.587 15:34:21 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:38.587 15:34:21 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:38.587 15:34:21 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:38.587 15:34:21 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:38.587 15:34:21 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:38.587 15:34:21 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:38.587 15:34:21 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:38.587 15:34:21 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:38.587 15:34:21 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:38.587 15:34:21 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:38.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.587 --rc genhtml_branch_coverage=1 00:07:38.587 --rc genhtml_function_coverage=1 00:07:38.587 --rc genhtml_legend=1 00:07:38.587 --rc geninfo_all_blocks=1 00:07:38.587 --rc geninfo_unexecuted_blocks=1 00:07:38.587 00:07:38.587 ' 00:07:38.587 15:34:21 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:38.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.587 --rc genhtml_branch_coverage=1 00:07:38.587 --rc genhtml_function_coverage=1 00:07:38.587 --rc 
genhtml_legend=1 00:07:38.587 --rc geninfo_all_blocks=1 00:07:38.587 --rc geninfo_unexecuted_blocks=1 00:07:38.587 00:07:38.587 ' 00:07:38.587 15:34:21 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:38.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.587 --rc genhtml_branch_coverage=1 00:07:38.587 --rc genhtml_function_coverage=1 00:07:38.587 --rc genhtml_legend=1 00:07:38.587 --rc geninfo_all_blocks=1 00:07:38.587 --rc geninfo_unexecuted_blocks=1 00:07:38.587 00:07:38.587 ' 00:07:38.587 15:34:21 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:38.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.587 --rc genhtml_branch_coverage=1 00:07:38.587 --rc genhtml_function_coverage=1 00:07:38.587 --rc genhtml_legend=1 00:07:38.587 --rc geninfo_all_blocks=1 00:07:38.587 --rc geninfo_unexecuted_blocks=1 00:07:38.587 00:07:38.587 ' 00:07:38.587 15:34:21 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:38.587 15:34:21 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:38.587 15:34:21 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:38.587 15:34:21 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:38.587 15:34:21 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:38.587 15:34:21 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:38.587 15:34:21 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:38.587 15:34:21 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:38.587 15:34:21 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:38.587 15:34:21 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:38.587 15:34:21 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:38.587 15:34:21 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:38.587 15:34:21 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:792076c3-050c-4de8-8516-9038b1df6f80 00:07:38.587 15:34:21 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=792076c3-050c-4de8-8516-9038b1df6f80 00:07:38.587 15:34:21 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:38.587 15:34:21 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:38.587 15:34:21 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:38.587 15:34:21 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:38.587 15:34:21 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:38.587 15:34:21 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:38.587 15:34:21 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:38.587 15:34:21 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:38.587 15:34:21 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:38.587 15:34:21 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.587 15:34:21 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.587 15:34:21 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.587 15:34:21 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:38.587 15:34:21 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.587 15:34:21 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:38.587 15:34:21 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:38.587 15:34:21 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:38.587 15:34:21 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:38.587 15:34:21 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:38.587 15:34:21 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:07:38.587 15:34:21 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:38.587 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:38.587 15:34:21 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:38.587 15:34:21 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:38.587 15:34:21 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:38.587 15:34:21 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:38.587 15:34:21 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:38.587 15:34:21 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:38.587 15:34:21 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:38.587 15:34:21 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:38.587 15:34:21 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:38.587 15:34:21 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:38.587 15:34:21 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:38.587 15:34:21 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:38.587 15:34:21 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:38.587 15:34:21 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:38.587 INFO: launching applications... 
00:07:38.587 15:34:21 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:38.587 15:34:21 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:38.587 15:34:21 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:38.587 15:34:21 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:38.587 15:34:21 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:38.587 15:34:21 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:38.587 15:34:21 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:38.587 15:34:21 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:38.587 15:34:21 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57543 00:07:38.587 15:34:21 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:38.587 Waiting for target to run... 00:07:38.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:07:38.587 15:34:21 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57543 /var/tmp/spdk_tgt.sock 00:07:38.587 15:34:21 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:38.587 15:34:21 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57543 ']' 00:07:38.588 15:34:21 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:38.588 15:34:21 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.588 15:34:21 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:38.588 15:34:21 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.588 15:34:21 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:38.847 [2024-12-06 15:34:21.921899] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:07:38.848 [2024-12-06 15:34:21.922053] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57543 ] 00:07:39.416 [2024-12-06 15:34:22.493331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.416 [2024-12-06 15:34:22.611304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.353 00:07:40.353 INFO: shutting down applications... 
00:07:40.353 15:34:23 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.353 15:34:23 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:07:40.353 15:34:23 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:40.353 15:34:23 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:07:40.353 15:34:23 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:40.353 15:34:23 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:40.353 15:34:23 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:40.353 15:34:23 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57543 ]] 00:07:40.353 15:34:23 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57543 00:07:40.353 15:34:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:40.353 15:34:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:40.353 15:34:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57543 00:07:40.353 15:34:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:40.612 15:34:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:40.612 15:34:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:40.612 15:34:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57543 00:07:40.612 15:34:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:41.179 15:34:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:41.179 15:34:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:41.179 15:34:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57543 00:07:41.179 15:34:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:41.746 15:34:24 
json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:41.746 15:34:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:41.746 15:34:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57543 00:07:41.746 15:34:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:42.318 15:34:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:42.318 15:34:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:42.318 15:34:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57543 00:07:42.318 15:34:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:42.911 15:34:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:42.911 15:34:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:42.911 15:34:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57543 00:07:42.911 15:34:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:43.179 15:34:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:43.179 15:34:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:43.179 15:34:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57543 00:07:43.179 15:34:26 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:43.745 15:34:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:43.745 15:34:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:43.745 SPDK target shutdown done 00:07:43.745 15:34:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57543 00:07:43.745 15:34:26 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:43.745 15:34:26 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:43.745 15:34:26 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:43.745 15:34:26 
json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:43.745 15:34:26 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:43.745 Success 00:07:43.745 00:07:43.745 real 0m5.343s 00:07:43.745 user 0m4.389s 00:07:43.745 sys 0m0.849s 00:07:43.745 15:34:26 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.745 15:34:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:43.745 ************************************ 00:07:43.745 END TEST json_config_extra_key 00:07:43.745 ************************************ 00:07:43.745 15:34:26 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:43.745 15:34:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:43.745 15:34:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.745 15:34:26 -- common/autotest_common.sh@10 -- # set +x 00:07:43.745 ************************************ 00:07:43.745 START TEST alias_rpc 00:07:43.745 ************************************ 00:07:43.745 15:34:26 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:44.003 * Looking for test storage... 
00:07:44.003 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:44.003 15:34:27 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:44.003 15:34:27 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:07:44.003 15:34:27 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:44.003 15:34:27 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:44.003 15:34:27 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:44.003 15:34:27 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:44.003 15:34:27 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:44.003 15:34:27 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:44.003 15:34:27 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:44.003 15:34:27 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:44.003 15:34:27 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:44.003 15:34:27 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:44.003 15:34:27 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:44.003 15:34:27 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:44.003 15:34:27 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:44.003 15:34:27 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:44.003 15:34:27 alias_rpc -- scripts/common.sh@345 -- # : 1 00:07:44.003 15:34:27 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:44.003 15:34:27 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:44.003 15:34:27 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:44.003 15:34:27 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:44.003 15:34:27 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:44.003 15:34:27 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:44.003 15:34:27 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:44.003 15:34:27 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:44.003 15:34:27 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:44.003 15:34:27 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:44.003 15:34:27 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:44.003 15:34:27 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:44.003 15:34:27 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:44.003 15:34:27 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:44.003 15:34:27 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:44.003 15:34:27 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:44.003 15:34:27 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:44.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.003 --rc genhtml_branch_coverage=1 00:07:44.003 --rc genhtml_function_coverage=1 00:07:44.003 --rc genhtml_legend=1 00:07:44.003 --rc geninfo_all_blocks=1 00:07:44.003 --rc geninfo_unexecuted_blocks=1 00:07:44.003 00:07:44.003 ' 00:07:44.003 15:34:27 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:44.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.003 --rc genhtml_branch_coverage=1 00:07:44.003 --rc genhtml_function_coverage=1 00:07:44.003 --rc genhtml_legend=1 00:07:44.003 --rc geninfo_all_blocks=1 00:07:44.003 --rc geninfo_unexecuted_blocks=1 00:07:44.003 00:07:44.003 ' 00:07:44.003 15:34:27 alias_rpc -- common/autotest_common.sh@1725 -- 
# export 'LCOV=lcov 00:07:44.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.003 --rc genhtml_branch_coverage=1 00:07:44.003 --rc genhtml_function_coverage=1 00:07:44.003 --rc genhtml_legend=1 00:07:44.003 --rc geninfo_all_blocks=1 00:07:44.003 --rc geninfo_unexecuted_blocks=1 00:07:44.003 00:07:44.003 ' 00:07:44.003 15:34:27 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:44.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.003 --rc genhtml_branch_coverage=1 00:07:44.003 --rc genhtml_function_coverage=1 00:07:44.003 --rc genhtml_legend=1 00:07:44.003 --rc geninfo_all_blocks=1 00:07:44.003 --rc geninfo_unexecuted_blocks=1 00:07:44.003 00:07:44.003 ' 00:07:44.004 15:34:27 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:44.004 15:34:27 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57666 00:07:44.004 15:34:27 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:44.004 15:34:27 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57666 00:07:44.004 15:34:27 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57666 ']' 00:07:44.004 15:34:27 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.004 15:34:27 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:44.004 15:34:27 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.004 15:34:27 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:44.004 15:34:27 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.261 [2024-12-06 15:34:27.345570] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:07:44.261 [2024-12-06 15:34:27.345984] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57666 ] 00:07:44.261 [2024-12-06 15:34:27.537816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.520 [2024-12-06 15:34:27.696686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.896 15:34:28 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.896 15:34:28 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:45.896 15:34:28 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:45.896 15:34:29 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57666 00:07:45.896 15:34:29 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57666 ']' 00:07:45.896 15:34:29 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57666 00:07:45.896 15:34:29 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:07:45.896 15:34:29 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:45.896 15:34:29 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57666 00:07:45.896 killing process with pid 57666 00:07:45.896 15:34:29 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:45.896 15:34:29 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:45.896 15:34:29 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57666' 00:07:45.896 15:34:29 alias_rpc -- common/autotest_common.sh@973 -- # kill 57666 00:07:45.896 15:34:29 alias_rpc -- common/autotest_common.sh@978 -- # wait 57666 00:07:49.177 00:07:49.177 real 0m4.799s 00:07:49.177 user 0m4.640s 00:07:49.177 sys 0m0.821s 00:07:49.177 15:34:31 alias_rpc -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:07:49.177 ************************************ 00:07:49.177 END TEST alias_rpc 00:07:49.177 ************************************ 00:07:49.177 15:34:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.177 15:34:31 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:49.177 15:34:31 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:49.177 15:34:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:49.177 15:34:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.177 15:34:31 -- common/autotest_common.sh@10 -- # set +x 00:07:49.177 ************************************ 00:07:49.177 START TEST spdkcli_tcp 00:07:49.177 ************************************ 00:07:49.177 15:34:31 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:49.177 * Looking for test storage... 00:07:49.177 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:07:49.177 15:34:31 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:49.177 15:34:31 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:07:49.177 15:34:31 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:49.177 15:34:32 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:49.177 15:34:32 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:49.177 15:34:32 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:49.177 15:34:32 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:49.177 15:34:32 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.177 15:34:32 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:49.177 15:34:32 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:49.177 15:34:32 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:49.177 15:34:32 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:49.177 
15:34:32 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:49.177 15:34:32 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:49.177 15:34:32 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:49.177 15:34:32 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:49.177 15:34:32 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:49.177 15:34:32 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:49.177 15:34:32 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:49.177 15:34:32 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:49.177 15:34:32 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:49.177 15:34:32 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.177 15:34:32 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:49.177 15:34:32 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:49.177 15:34:32 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:49.177 15:34:32 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:49.177 15:34:32 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.177 15:34:32 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:49.177 15:34:32 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:49.177 15:34:32 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:49.177 15:34:32 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:49.177 15:34:32 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:49.177 15:34:32 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.177 15:34:32 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:49.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.178 --rc genhtml_branch_coverage=1 00:07:49.178 --rc genhtml_function_coverage=1 00:07:49.178 --rc genhtml_legend=1 
00:07:49.178 --rc geninfo_all_blocks=1 00:07:49.178 --rc geninfo_unexecuted_blocks=1 00:07:49.178 00:07:49.178 ' 00:07:49.178 15:34:32 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:49.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.178 --rc genhtml_branch_coverage=1 00:07:49.178 --rc genhtml_function_coverage=1 00:07:49.178 --rc genhtml_legend=1 00:07:49.178 --rc geninfo_all_blocks=1 00:07:49.178 --rc geninfo_unexecuted_blocks=1 00:07:49.178 00:07:49.178 ' 00:07:49.178 15:34:32 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:49.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.178 --rc genhtml_branch_coverage=1 00:07:49.178 --rc genhtml_function_coverage=1 00:07:49.178 --rc genhtml_legend=1 00:07:49.178 --rc geninfo_all_blocks=1 00:07:49.178 --rc geninfo_unexecuted_blocks=1 00:07:49.178 00:07:49.178 ' 00:07:49.178 15:34:32 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:49.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.178 --rc genhtml_branch_coverage=1 00:07:49.178 --rc genhtml_function_coverage=1 00:07:49.178 --rc genhtml_legend=1 00:07:49.178 --rc geninfo_all_blocks=1 00:07:49.178 --rc geninfo_unexecuted_blocks=1 00:07:49.178 00:07:49.178 ' 00:07:49.178 15:34:32 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:07:49.178 15:34:32 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:07:49.178 15:34:32 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:07:49.178 15:34:32 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:49.178 15:34:32 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:49.178 15:34:32 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:49.178 15:34:32 spdkcli_tcp -- 
spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:49.178 15:34:32 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:49.178 15:34:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:49.178 15:34:32 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57779 00:07:49.178 15:34:32 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:49.178 15:34:32 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57779 00:07:49.178 15:34:32 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57779 ']' 00:07:49.178 15:34:32 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.178 15:34:32 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.178 15:34:32 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.178 15:34:32 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.178 15:34:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:49.178 [2024-12-06 15:34:32.222088] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:07:49.178 [2024-12-06 15:34:32.222274] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57779 ] 00:07:49.178 [2024-12-06 15:34:32.412172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:49.437 [2024-12-06 15:34:32.566571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.437 [2024-12-06 15:34:32.566610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.369 15:34:33 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.369 15:34:33 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:07:50.369 15:34:33 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:50.369 15:34:33 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57801 00:07:50.369 15:34:33 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:50.626 [ 00:07:50.626 "bdev_malloc_delete", 00:07:50.626 "bdev_malloc_create", 00:07:50.626 "bdev_null_resize", 00:07:50.626 "bdev_null_delete", 00:07:50.626 "bdev_null_create", 00:07:50.626 "bdev_nvme_cuse_unregister", 00:07:50.626 "bdev_nvme_cuse_register", 00:07:50.626 "bdev_opal_new_user", 00:07:50.626 "bdev_opal_set_lock_state", 00:07:50.626 "bdev_opal_delete", 00:07:50.626 "bdev_opal_get_info", 00:07:50.626 "bdev_opal_create", 00:07:50.626 "bdev_nvme_opal_revert", 00:07:50.626 "bdev_nvme_opal_init", 00:07:50.626 "bdev_nvme_send_cmd", 00:07:50.626 "bdev_nvme_set_keys", 00:07:50.626 "bdev_nvme_get_path_iostat", 00:07:50.626 "bdev_nvme_get_mdns_discovery_info", 00:07:50.626 "bdev_nvme_stop_mdns_discovery", 00:07:50.626 "bdev_nvme_start_mdns_discovery", 00:07:50.626 "bdev_nvme_set_multipath_policy", 00:07:50.626 
"bdev_nvme_set_preferred_path", 00:07:50.626 "bdev_nvme_get_io_paths", 00:07:50.626 "bdev_nvme_remove_error_injection", 00:07:50.626 "bdev_nvme_add_error_injection", 00:07:50.626 "bdev_nvme_get_discovery_info", 00:07:50.626 "bdev_nvme_stop_discovery", 00:07:50.626 "bdev_nvme_start_discovery", 00:07:50.626 "bdev_nvme_get_controller_health_info", 00:07:50.626 "bdev_nvme_disable_controller", 00:07:50.626 "bdev_nvme_enable_controller", 00:07:50.626 "bdev_nvme_reset_controller", 00:07:50.626 "bdev_nvme_get_transport_statistics", 00:07:50.626 "bdev_nvme_apply_firmware", 00:07:50.626 "bdev_nvme_detach_controller", 00:07:50.626 "bdev_nvme_get_controllers", 00:07:50.626 "bdev_nvme_attach_controller", 00:07:50.626 "bdev_nvme_set_hotplug", 00:07:50.626 "bdev_nvme_set_options", 00:07:50.626 "bdev_passthru_delete", 00:07:50.626 "bdev_passthru_create", 00:07:50.626 "bdev_lvol_set_parent_bdev", 00:07:50.626 "bdev_lvol_set_parent", 00:07:50.626 "bdev_lvol_check_shallow_copy", 00:07:50.626 "bdev_lvol_start_shallow_copy", 00:07:50.626 "bdev_lvol_grow_lvstore", 00:07:50.626 "bdev_lvol_get_lvols", 00:07:50.626 "bdev_lvol_get_lvstores", 00:07:50.626 "bdev_lvol_delete", 00:07:50.626 "bdev_lvol_set_read_only", 00:07:50.626 "bdev_lvol_resize", 00:07:50.626 "bdev_lvol_decouple_parent", 00:07:50.626 "bdev_lvol_inflate", 00:07:50.626 "bdev_lvol_rename", 00:07:50.626 "bdev_lvol_clone_bdev", 00:07:50.626 "bdev_lvol_clone", 00:07:50.626 "bdev_lvol_snapshot", 00:07:50.626 "bdev_lvol_create", 00:07:50.626 "bdev_lvol_delete_lvstore", 00:07:50.626 "bdev_lvol_rename_lvstore", 00:07:50.626 "bdev_lvol_create_lvstore", 00:07:50.626 "bdev_raid_set_options", 00:07:50.626 "bdev_raid_remove_base_bdev", 00:07:50.626 "bdev_raid_add_base_bdev", 00:07:50.626 "bdev_raid_delete", 00:07:50.626 "bdev_raid_create", 00:07:50.626 "bdev_raid_get_bdevs", 00:07:50.626 "bdev_error_inject_error", 00:07:50.626 "bdev_error_delete", 00:07:50.626 "bdev_error_create", 00:07:50.626 "bdev_split_delete", 00:07:50.626 
"bdev_split_create", 00:07:50.626 "bdev_delay_delete", 00:07:50.626 "bdev_delay_create", 00:07:50.626 "bdev_delay_update_latency", 00:07:50.626 "bdev_zone_block_delete", 00:07:50.626 "bdev_zone_block_create", 00:07:50.626 "blobfs_create", 00:07:50.626 "blobfs_detect", 00:07:50.626 "blobfs_set_cache_size", 00:07:50.626 "bdev_aio_delete", 00:07:50.626 "bdev_aio_rescan", 00:07:50.626 "bdev_aio_create", 00:07:50.626 "bdev_ftl_set_property", 00:07:50.626 "bdev_ftl_get_properties", 00:07:50.626 "bdev_ftl_get_stats", 00:07:50.626 "bdev_ftl_unmap", 00:07:50.626 "bdev_ftl_unload", 00:07:50.626 "bdev_ftl_delete", 00:07:50.626 "bdev_ftl_load", 00:07:50.626 "bdev_ftl_create", 00:07:50.626 "bdev_virtio_attach_controller", 00:07:50.626 "bdev_virtio_scsi_get_devices", 00:07:50.626 "bdev_virtio_detach_controller", 00:07:50.626 "bdev_virtio_blk_set_hotplug", 00:07:50.626 "bdev_iscsi_delete", 00:07:50.626 "bdev_iscsi_create", 00:07:50.626 "bdev_iscsi_set_options", 00:07:50.626 "accel_error_inject_error", 00:07:50.626 "ioat_scan_accel_module", 00:07:50.626 "dsa_scan_accel_module", 00:07:50.626 "iaa_scan_accel_module", 00:07:50.626 "keyring_file_remove_key", 00:07:50.626 "keyring_file_add_key", 00:07:50.626 "keyring_linux_set_options", 00:07:50.626 "fsdev_aio_delete", 00:07:50.626 "fsdev_aio_create", 00:07:50.626 "iscsi_get_histogram", 00:07:50.626 "iscsi_enable_histogram", 00:07:50.626 "iscsi_set_options", 00:07:50.626 "iscsi_get_auth_groups", 00:07:50.626 "iscsi_auth_group_remove_secret", 00:07:50.626 "iscsi_auth_group_add_secret", 00:07:50.626 "iscsi_delete_auth_group", 00:07:50.626 "iscsi_create_auth_group", 00:07:50.627 "iscsi_set_discovery_auth", 00:07:50.627 "iscsi_get_options", 00:07:50.627 "iscsi_target_node_request_logout", 00:07:50.627 "iscsi_target_node_set_redirect", 00:07:50.627 "iscsi_target_node_set_auth", 00:07:50.627 "iscsi_target_node_add_lun", 00:07:50.627 "iscsi_get_stats", 00:07:50.627 "iscsi_get_connections", 00:07:50.627 "iscsi_portal_group_set_auth", 
00:07:50.627 "iscsi_start_portal_group", 00:07:50.627 "iscsi_delete_portal_group", 00:07:50.627 "iscsi_create_portal_group", 00:07:50.627 "iscsi_get_portal_groups", 00:07:50.627 "iscsi_delete_target_node", 00:07:50.627 "iscsi_target_node_remove_pg_ig_maps", 00:07:50.627 "iscsi_target_node_add_pg_ig_maps", 00:07:50.627 "iscsi_create_target_node", 00:07:50.627 "iscsi_get_target_nodes", 00:07:50.627 "iscsi_delete_initiator_group", 00:07:50.627 "iscsi_initiator_group_remove_initiators", 00:07:50.627 "iscsi_initiator_group_add_initiators", 00:07:50.627 "iscsi_create_initiator_group", 00:07:50.627 "iscsi_get_initiator_groups", 00:07:50.627 "nvmf_set_crdt", 00:07:50.627 "nvmf_set_config", 00:07:50.627 "nvmf_set_max_subsystems", 00:07:50.627 "nvmf_stop_mdns_prr", 00:07:50.627 "nvmf_publish_mdns_prr", 00:07:50.627 "nvmf_subsystem_get_listeners", 00:07:50.627 "nvmf_subsystem_get_qpairs", 00:07:50.627 "nvmf_subsystem_get_controllers", 00:07:50.627 "nvmf_get_stats", 00:07:50.627 "nvmf_get_transports", 00:07:50.627 "nvmf_create_transport", 00:07:50.627 "nvmf_get_targets", 00:07:50.627 "nvmf_delete_target", 00:07:50.627 "nvmf_create_target", 00:07:50.627 "nvmf_subsystem_allow_any_host", 00:07:50.627 "nvmf_subsystem_set_keys", 00:07:50.627 "nvmf_subsystem_remove_host", 00:07:50.627 "nvmf_subsystem_add_host", 00:07:50.627 "nvmf_ns_remove_host", 00:07:50.627 "nvmf_ns_add_host", 00:07:50.627 "nvmf_subsystem_remove_ns", 00:07:50.627 "nvmf_subsystem_set_ns_ana_group", 00:07:50.627 "nvmf_subsystem_add_ns", 00:07:50.627 "nvmf_subsystem_listener_set_ana_state", 00:07:50.627 "nvmf_discovery_get_referrals", 00:07:50.627 "nvmf_discovery_remove_referral", 00:07:50.627 "nvmf_discovery_add_referral", 00:07:50.627 "nvmf_subsystem_remove_listener", 00:07:50.627 "nvmf_subsystem_add_listener", 00:07:50.627 "nvmf_delete_subsystem", 00:07:50.627 "nvmf_create_subsystem", 00:07:50.627 "nvmf_get_subsystems", 00:07:50.627 "env_dpdk_get_mem_stats", 00:07:50.627 "nbd_get_disks", 00:07:50.627 
"nbd_stop_disk", 00:07:50.627 "nbd_start_disk", 00:07:50.627 "ublk_recover_disk", 00:07:50.627 "ublk_get_disks", 00:07:50.627 "ublk_stop_disk", 00:07:50.627 "ublk_start_disk", 00:07:50.627 "ublk_destroy_target", 00:07:50.627 "ublk_create_target", 00:07:50.627 "virtio_blk_create_transport", 00:07:50.627 "virtio_blk_get_transports", 00:07:50.627 "vhost_controller_set_coalescing", 00:07:50.627 "vhost_get_controllers", 00:07:50.627 "vhost_delete_controller", 00:07:50.627 "vhost_create_blk_controller", 00:07:50.627 "vhost_scsi_controller_remove_target", 00:07:50.627 "vhost_scsi_controller_add_target", 00:07:50.627 "vhost_start_scsi_controller", 00:07:50.627 "vhost_create_scsi_controller", 00:07:50.627 "thread_set_cpumask", 00:07:50.627 "scheduler_set_options", 00:07:50.627 "framework_get_governor", 00:07:50.627 "framework_get_scheduler", 00:07:50.627 "framework_set_scheduler", 00:07:50.627 "framework_get_reactors", 00:07:50.627 "thread_get_io_channels", 00:07:50.627 "thread_get_pollers", 00:07:50.627 "thread_get_stats", 00:07:50.627 "framework_monitor_context_switch", 00:07:50.627 "spdk_kill_instance", 00:07:50.627 "log_enable_timestamps", 00:07:50.627 "log_get_flags", 00:07:50.627 "log_clear_flag", 00:07:50.627 "log_set_flag", 00:07:50.627 "log_get_level", 00:07:50.627 "log_set_level", 00:07:50.627 "log_get_print_level", 00:07:50.627 "log_set_print_level", 00:07:50.627 "framework_enable_cpumask_locks", 00:07:50.627 "framework_disable_cpumask_locks", 00:07:50.627 "framework_wait_init", 00:07:50.627 "framework_start_init", 00:07:50.627 "scsi_get_devices", 00:07:50.627 "bdev_get_histogram", 00:07:50.627 "bdev_enable_histogram", 00:07:50.627 "bdev_set_qos_limit", 00:07:50.627 "bdev_set_qd_sampling_period", 00:07:50.627 "bdev_get_bdevs", 00:07:50.627 "bdev_reset_iostat", 00:07:50.627 "bdev_get_iostat", 00:07:50.627 "bdev_examine", 00:07:50.627 "bdev_wait_for_examine", 00:07:50.627 "bdev_set_options", 00:07:50.627 "accel_get_stats", 00:07:50.627 "accel_set_options", 
00:07:50.627 "accel_set_driver", 00:07:50.627 "accel_crypto_key_destroy", 00:07:50.627 "accel_crypto_keys_get", 00:07:50.627 "accel_crypto_key_create", 00:07:50.627 "accel_assign_opc", 00:07:50.627 "accel_get_module_info", 00:07:50.627 "accel_get_opc_assignments", 00:07:50.627 "vmd_rescan", 00:07:50.627 "vmd_remove_device", 00:07:50.627 "vmd_enable", 00:07:50.627 "sock_get_default_impl", 00:07:50.627 "sock_set_default_impl", 00:07:50.627 "sock_impl_set_options", 00:07:50.627 "sock_impl_get_options", 00:07:50.627 "iobuf_get_stats", 00:07:50.627 "iobuf_set_options", 00:07:50.627 "keyring_get_keys", 00:07:50.627 "framework_get_pci_devices", 00:07:50.627 "framework_get_config", 00:07:50.627 "framework_get_subsystems", 00:07:50.627 "fsdev_set_opts", 00:07:50.627 "fsdev_get_opts", 00:07:50.627 "trace_get_info", 00:07:50.627 "trace_get_tpoint_group_mask", 00:07:50.627 "trace_disable_tpoint_group", 00:07:50.627 "trace_enable_tpoint_group", 00:07:50.627 "trace_clear_tpoint_mask", 00:07:50.627 "trace_set_tpoint_mask", 00:07:50.627 "notify_get_notifications", 00:07:50.627 "notify_get_types", 00:07:50.627 "spdk_get_version", 00:07:50.627 "rpc_get_methods" 00:07:50.627 ] 00:07:50.627 15:34:33 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:50.627 15:34:33 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:50.627 15:34:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:50.884 15:34:33 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:50.884 15:34:33 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57779 00:07:50.884 15:34:33 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57779 ']' 00:07:50.884 15:34:33 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57779 00:07:50.884 15:34:33 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:07:50.884 15:34:33 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:50.884 15:34:33 spdkcli_tcp -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57779 00:07:50.884 killing process with pid 57779 00:07:50.884 15:34:33 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:50.884 15:34:33 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:50.884 15:34:33 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57779' 00:07:50.884 15:34:33 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57779 00:07:50.884 15:34:33 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57779 00:07:53.430 ************************************ 00:07:53.430 END TEST spdkcli_tcp 00:07:53.430 ************************************ 00:07:53.430 00:07:53.430 real 0m4.780s 00:07:53.430 user 0m8.374s 00:07:53.430 sys 0m0.873s 00:07:53.430 15:34:36 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.430 15:34:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:53.430 15:34:36 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:53.430 15:34:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:53.430 15:34:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.430 15:34:36 -- common/autotest_common.sh@10 -- # set +x 00:07:53.430 ************************************ 00:07:53.430 START TEST dpdk_mem_utility 00:07:53.430 ************************************ 00:07:53.430 15:34:36 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:53.688 * Looking for test storage... 
00:07:53.688 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:53.688 15:34:36 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:53.688 15:34:36 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:07:53.688 15:34:36 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:53.688 15:34:36 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:53.688 15:34:36 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:53.688 15:34:36 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:53.688 15:34:36 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:53.688 15:34:36 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:53.688 15:34:36 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:53.688 15:34:36 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:53.688 15:34:36 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:53.688 15:34:36 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:53.688 15:34:36 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:53.688 15:34:36 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:53.688 15:34:36 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:53.688 15:34:36 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:53.688 15:34:36 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:53.688 15:34:36 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:53.688 15:34:36 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:53.688 15:34:36 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:53.688 15:34:36 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:53.688 15:34:36 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:53.688 15:34:36 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:53.688 15:34:36 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:53.688 15:34:36 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:53.688 15:34:36 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:53.688 15:34:36 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:53.688 15:34:36 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:53.688 15:34:36 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:53.688 15:34:36 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:53.688 15:34:36 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:53.688 15:34:36 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:53.688 15:34:36 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:53.688 15:34:36 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:53.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.688 --rc genhtml_branch_coverage=1 00:07:53.688 --rc genhtml_function_coverage=1 00:07:53.688 --rc genhtml_legend=1 00:07:53.688 --rc geninfo_all_blocks=1 00:07:53.688 --rc geninfo_unexecuted_blocks=1 00:07:53.688 00:07:53.688 ' 00:07:53.688 15:34:36 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:53.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.688 --rc genhtml_branch_coverage=1 00:07:53.688 --rc genhtml_function_coverage=1 00:07:53.688 --rc genhtml_legend=1 00:07:53.688 --rc geninfo_all_blocks=1 00:07:53.688 --rc 
geninfo_unexecuted_blocks=1 00:07:53.688 00:07:53.688 ' 00:07:53.688 15:34:36 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:53.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.688 --rc genhtml_branch_coverage=1 00:07:53.688 --rc genhtml_function_coverage=1 00:07:53.688 --rc genhtml_legend=1 00:07:53.688 --rc geninfo_all_blocks=1 00:07:53.688 --rc geninfo_unexecuted_blocks=1 00:07:53.688 00:07:53.688 ' 00:07:53.688 15:34:36 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:53.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.688 --rc genhtml_branch_coverage=1 00:07:53.688 --rc genhtml_function_coverage=1 00:07:53.688 --rc genhtml_legend=1 00:07:53.688 --rc geninfo_all_blocks=1 00:07:53.688 --rc geninfo_unexecuted_blocks=1 00:07:53.688 00:07:53.688 ' 00:07:53.688 15:34:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:53.688 15:34:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57906 00:07:53.688 15:34:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:53.688 15:34:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57906 00:07:53.688 15:34:36 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57906 ']' 00:07:53.688 15:34:36 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.688 15:34:36 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.688 15:34:36 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:53.688 15:34:36 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.688 15:34:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:53.946 [2024-12-06 15:34:37.057536] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:07:53.946 [2024-12-06 15:34:37.057938] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57906 ] 00:07:54.204 [2024-12-06 15:34:37.244594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.204 [2024-12-06 15:34:37.398144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.584 15:34:38 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:55.584 15:34:38 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:07:55.584 15:34:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:55.584 15:34:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:55.584 15:34:38 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.584 15:34:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:55.584 { 00:07:55.584 "filename": "/tmp/spdk_mem_dump.txt" 00:07:55.584 } 00:07:55.584 15:34:38 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.584 15:34:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:55.584 DPDK memory size 824.000000 MiB in 1 heap(s) 00:07:55.584 1 heaps totaling size 824.000000 MiB 00:07:55.584 size: 824.000000 MiB heap id: 0 00:07:55.584 end heaps---------- 00:07:55.584 9 mempools totaling size 603.782043 MiB 00:07:55.584 
size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:55.584 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:55.584 size: 100.555481 MiB name: bdev_io_57906 00:07:55.584 size: 50.003479 MiB name: msgpool_57906 00:07:55.584 size: 36.509338 MiB name: fsdev_io_57906 00:07:55.584 size: 21.763794 MiB name: PDU_Pool 00:07:55.584 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:55.584 size: 4.133484 MiB name: evtpool_57906 00:07:55.584 size: 0.026123 MiB name: Session_Pool 00:07:55.584 end mempools------- 00:07:55.584 6 memzones totaling size 4.142822 MiB 00:07:55.584 size: 1.000366 MiB name: RG_ring_0_57906 00:07:55.584 size: 1.000366 MiB name: RG_ring_1_57906 00:07:55.584 size: 1.000366 MiB name: RG_ring_4_57906 00:07:55.584 size: 1.000366 MiB name: RG_ring_5_57906 00:07:55.584 size: 0.125366 MiB name: RG_ring_2_57906 00:07:55.584 size: 0.015991 MiB name: RG_ring_3_57906 00:07:55.584 end memzones------- 00:07:55.584 15:34:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:55.584 heap id: 0 total size: 824.000000 MiB number of busy elements: 324 number of free elements: 18 00:07:55.584 list of free elements. 
size: 16.779175 MiB 00:07:55.584 element at address: 0x200006400000 with size: 1.995972 MiB 00:07:55.584 element at address: 0x20000a600000 with size: 1.995972 MiB 00:07:55.584 element at address: 0x200003e00000 with size: 1.991028 MiB 00:07:55.584 element at address: 0x200019500040 with size: 0.999939 MiB 00:07:55.584 element at address: 0x200019900040 with size: 0.999939 MiB 00:07:55.584 element at address: 0x200019a00000 with size: 0.999084 MiB 00:07:55.584 element at address: 0x200032600000 with size: 0.994324 MiB 00:07:55.584 element at address: 0x200000400000 with size: 0.992004 MiB 00:07:55.584 element at address: 0x200019200000 with size: 0.959656 MiB 00:07:55.584 element at address: 0x200019d00040 with size: 0.936401 MiB 00:07:55.584 element at address: 0x200000200000 with size: 0.716980 MiB 00:07:55.584 element at address: 0x20001b400000 with size: 0.560486 MiB 00:07:55.584 element at address: 0x200000c00000 with size: 0.489197 MiB 00:07:55.584 element at address: 0x200019600000 with size: 0.487976 MiB 00:07:55.584 element at address: 0x200019e00000 with size: 0.485413 MiB 00:07:55.584 element at address: 0x200012c00000 with size: 0.433472 MiB 00:07:55.584 element at address: 0x200028800000 with size: 0.390442 MiB 00:07:55.584 element at address: 0x200000800000 with size: 0.350891 MiB 00:07:55.584 list of standard malloc elements. 
size: 199.289917 MiB 00:07:55.584 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:07:55.584 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:07:55.584 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:07:55.584 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:07:55.584 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:07:55.584 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:07:55.584 element at address: 0x200019deff40 with size: 0.062683 MiB 00:07:55.584 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:07:55.584 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:07:55.584 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:07:55.584 element at address: 0x200012bff040 with size: 0.000305 MiB 00:07:55.584 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:07:55.584 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:07:55.584 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:07:55.584 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:07:55.584 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:07:55.584 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:07:55.584 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:07:55.584 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:07:55.584 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:07:55.584 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:07:55.584 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:07:55.584 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:07:55.584 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:07:55.584 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:07:55.585 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:07:55.585 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:07:55.585 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:07:55.585 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:07:55.585 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:07:55.585 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:07:55.585 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:07:55.585 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:07:55.585 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:07:55.585 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:07:55.585 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:07:55.585 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:07:55.585 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:07:55.585 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:07:55.585 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:07:55.585 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:07:55.585 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:07:55.585 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200000c7e4c0 with 
size: 0.000244 MiB 00:07:55.585 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200000cff000 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200012bff180 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200012bff280 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200012bff380 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200012bff480 with size: 0.000244 MiB 00:07:55.585 element at address: 
0x200012bff580 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200012bff680 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200012bff780 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200012bff880 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200012bff980 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:07:55.585 
element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200019affc40 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20001b48f7c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20001b48f8c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20001b4908c0 with size: 0.000244 
MiB 00:07:55.585 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:07:55.585 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b4924c0 
with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:07:55.586 element at 
address: 0x20001b4940c0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:07:55.586 element at address: 0x200028863f40 with size: 0.000244 MiB 00:07:55.586 element at address: 0x200028864040 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886af80 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886b080 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886b180 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886b280 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886b380 with size: 0.000244 MiB 
00:07:55.586 element at address: 0x20002886b480 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886b580 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886b680 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886b780 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886b880 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886b980 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886be80 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886c080 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886c180 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886c280 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886c380 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886c480 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886c580 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886c680 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886c780 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886c880 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886c980 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886cf80 with 
size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886d080 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886d180 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886d280 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886d380 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886d480 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886d580 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886d680 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886d780 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886d880 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886d980 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886da80 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886db80 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886de80 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886df80 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886e080 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886e180 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886e280 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886e380 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886e480 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886e580 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886e680 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886e780 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886e880 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886e980 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:07:55.586 element at address: 
0x20002886eb80 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886f080 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886f180 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886f280 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886f380 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886f480 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886f580 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886f680 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886f780 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886f880 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886f980 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:07:55.586 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:07:55.586 list of memzone associated elements. 
size: 607.930908 MiB 00:07:55.586 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:07:55.586 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:55.587 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:07:55.587 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:55.587 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:07:55.587 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_57906_0 00:07:55.587 element at address: 0x200000dff340 with size: 48.003113 MiB 00:07:55.587 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57906_0 00:07:55.587 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:07:55.587 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57906_0 00:07:55.587 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:07:55.587 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:55.587 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:07:55.587 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:55.587 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:07:55.587 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57906_0 00:07:55.587 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:07:55.587 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57906 00:07:55.587 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:07:55.587 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57906 00:07:55.587 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:07:55.587 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:55.587 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:07:55.587 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:55.587 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:07:55.587 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:55.587 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:07:55.587 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:55.587 element at address: 0x200000cff100 with size: 1.000549 MiB 00:07:55.587 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57906 00:07:55.587 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:07:55.587 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57906 00:07:55.587 element at address: 0x200019affd40 with size: 1.000549 MiB 00:07:55.587 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57906 00:07:55.587 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:07:55.587 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57906 00:07:55.587 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:07:55.587 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57906 00:07:55.587 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:07:55.587 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57906 00:07:55.587 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:07:55.587 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:55.587 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:07:55.587 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:55.587 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:07:55.587 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:55.587 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:07:55.587 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57906 00:07:55.587 element at address: 0x20000085df80 with size: 0.125549 MiB 00:07:55.587 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57906 00:07:55.587 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:07:55.587 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:55.587 element at address: 0x200028864140 with size: 0.023804 MiB 00:07:55.587 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:55.587 element at address: 0x200000859d40 with size: 0.016174 MiB 00:07:55.587 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57906 00:07:55.587 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:07:55.587 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:55.587 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:07:55.587 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57906 00:07:55.587 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:07:55.587 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57906 00:07:55.587 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:07:55.587 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57906 00:07:55.587 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:07:55.587 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:55.587 15:34:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:55.587 15:34:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57906 00:07:55.587 15:34:38 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57906 ']' 00:07:55.587 15:34:38 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57906 00:07:55.587 15:34:38 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:07:55.587 15:34:38 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:55.587 15:34:38 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57906 00:07:55.587 15:34:38 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:55.587 15:34:38 dpdk_mem_utility -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:55.587 15:34:38 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57906' 00:07:55.587 killing process with pid 57906 00:07:55.587 15:34:38 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57906 00:07:55.587 15:34:38 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57906 00:07:58.120 00:07:58.120 real 0m4.535s 00:07:58.120 user 0m4.216s 00:07:58.120 sys 0m0.800s 00:07:58.120 15:34:41 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.120 ************************************ 00:07:58.120 END TEST dpdk_mem_utility 00:07:58.120 ************************************ 00:07:58.120 15:34:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:58.120 15:34:41 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:58.120 15:34:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:58.120 15:34:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.120 15:34:41 -- common/autotest_common.sh@10 -- # set +x 00:07:58.120 ************************************ 00:07:58.120 START TEST event 00:07:58.120 ************************************ 00:07:58.120 15:34:41 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:58.378 * Looking for test storage... 
00:07:58.378 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:07:58.378 15:34:41 event -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:58.378 15:34:41 event -- common/autotest_common.sh@1711 -- # lcov --version
00:07:58.378 15:34:41 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:58.378 15:34:41 event -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:58.378 15:34:41 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:58.378 15:34:41 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:58.378 15:34:41 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:58.378 15:34:41 event -- scripts/common.sh@336 -- # IFS=.-:
00:07:58.378 15:34:41 event -- scripts/common.sh@336 -- # read -ra ver1
00:07:58.378 15:34:41 event -- scripts/common.sh@337 -- # IFS=.-:
00:07:58.378 15:34:41 event -- scripts/common.sh@337 -- # read -ra ver2
00:07:58.378 15:34:41 event -- scripts/common.sh@338 -- # local 'op=<'
00:07:58.378 15:34:41 event -- scripts/common.sh@340 -- # ver1_l=2
00:07:58.378 15:34:41 event -- scripts/common.sh@341 -- # ver2_l=1
00:07:58.378 15:34:41 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:58.378 15:34:41 event -- scripts/common.sh@344 -- # case "$op" in
00:07:58.378 15:34:41 event -- scripts/common.sh@345 -- # : 1
00:07:58.378 15:34:41 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:58.378 15:34:41 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:58.378 15:34:41 event -- scripts/common.sh@365 -- # decimal 1
00:07:58.378 15:34:41 event -- scripts/common.sh@353 -- # local d=1
00:07:58.378 15:34:41 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:58.378 15:34:41 event -- scripts/common.sh@355 -- # echo 1
00:07:58.378 15:34:41 event -- scripts/common.sh@365 -- # ver1[v]=1
00:07:58.378 15:34:41 event -- scripts/common.sh@366 -- # decimal 2
00:07:58.378 15:34:41 event -- scripts/common.sh@353 -- # local d=2
00:07:58.378 15:34:41 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:58.378 15:34:41 event -- scripts/common.sh@355 -- # echo 2
00:07:58.378 15:34:41 event -- scripts/common.sh@366 -- # ver2[v]=2
00:07:58.378 15:34:41 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:58.378 15:34:41 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:58.378 15:34:41 event -- scripts/common.sh@368 -- # return 0
00:07:58.378 15:34:41 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:58.378 15:34:41 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:58.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:58.378 --rc genhtml_branch_coverage=1
00:07:58.378 --rc genhtml_function_coverage=1
00:07:58.378 --rc genhtml_legend=1
00:07:58.378 --rc geninfo_all_blocks=1
00:07:58.378 --rc geninfo_unexecuted_blocks=1
00:07:58.378
00:07:58.378 '
00:07:58.378 15:34:41 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:58.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:58.378 --rc genhtml_branch_coverage=1
00:07:58.378 --rc genhtml_function_coverage=1
00:07:58.378 --rc genhtml_legend=1
00:07:58.378 --rc geninfo_all_blocks=1
00:07:58.378 --rc geninfo_unexecuted_blocks=1
00:07:58.378
00:07:58.378 '
00:07:58.378 15:34:41 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:07:58.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:58.378 --rc genhtml_branch_coverage=1
00:07:58.378 --rc genhtml_function_coverage=1
00:07:58.378 --rc genhtml_legend=1
00:07:58.378 --rc geninfo_all_blocks=1
00:07:58.378 --rc geninfo_unexecuted_blocks=1
00:07:58.378
00:07:58.378 '
00:07:58.378 15:34:41 event -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:07:58.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:58.378 --rc genhtml_branch_coverage=1
00:07:58.378 --rc genhtml_function_coverage=1
00:07:58.378 --rc genhtml_legend=1
00:07:58.378 --rc geninfo_all_blocks=1
00:07:58.378 --rc geninfo_unexecuted_blocks=1
00:07:58.378
00:07:58.378 '
00:07:58.378 15:34:41 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:07:58.378 15:34:41 event -- bdev/nbd_common.sh@6 -- # set -e
00:07:58.378 15:34:41 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:07:58.378 15:34:41 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:07:58.378 15:34:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:58.378 15:34:41 event -- common/autotest_common.sh@10 -- # set +x
00:07:58.378 ************************************
00:07:58.378 START TEST event_perf
00:07:58.378 ************************************
00:07:58.378 15:34:41 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:07:58.378 Running I/O for 1 seconds...[2024-12-06 15:34:41.610858] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization...
00:07:58.378 [2024-12-06 15:34:41.611081] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58020 ]
00:07:58.637 [2024-12-06 15:34:41.799374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:58.894 [2024-12-06 15:34:41.956412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:58.894 [2024-12-06 15:34:41.956551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:58.894 Running I/O for 1 seconds...[2024-12-06 15:34:41.957367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:58.894 [2024-12-06 15:34:41.957392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:07:59.893
00:07:59.893 lcore 0: 213902
00:07:59.893 lcore 1: 213901
00:07:59.893 lcore 2: 213902
00:07:59.893 lcore 3: 213902
00:08:00.151 done.
00:08:00.151 ************************************
00:08:00.151 END TEST event_perf
00:08:00.151 ************************************
00:08:00.151
00:08:00.151 real	0m1.657s
00:08:00.151 user	0m4.397s
00:08:00.151 sys	0m0.138s
00:08:00.151 15:34:43 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:00.151 15:34:43 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:08:00.151 15:34:43 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:08:00.151 15:34:43 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:08:00.151 15:34:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:00.151 15:34:43 event -- common/autotest_common.sh@10 -- # set +x
00:08:00.151 ************************************
00:08:00.151 START TEST event_reactor
00:08:00.151 ************************************
00:08:00.151 15:34:43 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:08:00.151 [2024-12-06 15:34:43.339765] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization...
00:08:00.151 [2024-12-06 15:34:43.340459] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58059 ]
00:08:00.408 [2024-12-06 15:34:43.524629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:00.408 [2024-12-06 15:34:43.661955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:01.784 test_start
00:08:01.784 oneshot
00:08:01.784 tick 100
00:08:01.784 tick 100
00:08:01.784 tick 250
00:08:01.784 tick 100
00:08:01.784 tick 100
00:08:01.784 tick 100
00:08:01.784 tick 250
00:08:01.784 tick 500
00:08:01.784 tick 100
00:08:01.784 tick 100
00:08:01.784 tick 250
00:08:01.784 tick 100
00:08:01.784 tick 100
00:08:01.784 test_end
00:08:01.784 ************************************
00:08:01.784 END TEST event_reactor
00:08:01.784 ************************************
00:08:01.784
00:08:01.784 real	0m1.612s
00:08:01.784 user	0m1.387s
00:08:01.784 sys	0m0.116s
00:08:01.784 15:34:44 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:01.784 15:34:44 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:08:01.784 15:34:44 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:08:01.784 15:34:44 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:08:01.784 15:34:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:01.784 15:34:44 event -- common/autotest_common.sh@10 -- # set +x
00:08:01.784 ************************************
00:08:01.784 START TEST event_reactor_perf
00:08:01.784 ************************************
00:08:01.784 15:34:44 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:08:01.784 [2024-12-06 15:34:45.025220] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization...
00:08:01.784 [2024-12-06 15:34:45.025480] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58096 ]
00:08:02.042 [2024-12-06 15:34:45.211984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:02.299 [2024-12-06 15:34:45.353752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:03.680 test_start
00:08:03.680 test_end
00:08:03.680 Performance: 382164 events per second
00:08:03.680
00:08:03.680 real	0m1.623s
00:08:03.680 user	0m1.388s
00:08:03.680 sys	0m0.127s
00:08:03.680 ************************************
00:08:03.680 END TEST event_reactor_perf
00:08:03.680 15:34:46 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:03.680 15:34:46 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:08:03.680 ************************************
00:08:03.680 15:34:46 event -- event/event.sh@49 -- # uname -s
00:08:03.680 15:34:46 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:08:03.680 15:34:46 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:08:03.680 15:34:46 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:03.680 15:34:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:03.680 15:34:46 event -- common/autotest_common.sh@10 -- # set +x
00:08:03.680 ************************************
00:08:03.680 START TEST event_scheduler
00:08:03.680 ************************************
00:08:03.680 15:34:46 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:08:03.680 * Looking for test storage...
00:08:03.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler
00:08:03.680 15:34:46 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:03.680 15:34:46 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version
00:08:03.680 15:34:46 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:03.680 15:34:46 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:03.680 15:34:46 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:03.680 15:34:46 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:03.680 15:34:46 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:03.680 15:34:46 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:08:03.680 15:34:46 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:08:03.680 15:34:46 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:08:03.680 15:34:46 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:08:03.680 15:34:46 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:08:03.680 15:34:46 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:08:03.680 15:34:46 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:08:03.680 15:34:46 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:03.680 15:34:46 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:08:03.680 15:34:46 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:08:03.680 15:34:46 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:03.680 15:34:46 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:03.680 15:34:46 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:08:03.680 15:34:46 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:08:03.680 15:34:46 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:03.680 15:34:46 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:08:03.680 15:34:46 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:08:03.680 15:34:46 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:08:03.680 15:34:46 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:08:03.680 15:34:46 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:03.680 15:34:46 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:08:03.680 15:34:46 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:08:03.680 15:34:46 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:03.680 15:34:46 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:03.680 15:34:46 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:08:03.680 15:34:46 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:03.680 15:34:46 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:03.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:03.681 --rc genhtml_branch_coverage=1
00:08:03.681 --rc genhtml_function_coverage=1
00:08:03.681 --rc genhtml_legend=1
00:08:03.681 --rc geninfo_all_blocks=1
00:08:03.681 --rc geninfo_unexecuted_blocks=1
00:08:03.681
00:08:03.681 '
00:08:03.681 15:34:46 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:03.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:03.681 --rc genhtml_branch_coverage=1
00:08:03.681 --rc genhtml_function_coverage=1
00:08:03.681 --rc genhtml_legend=1
00:08:03.681 --rc geninfo_all_blocks=1
00:08:03.681 --rc geninfo_unexecuted_blocks=1
00:08:03.681
00:08:03.681 '
00:08:03.681 15:34:46 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:08:03.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:03.681 --rc genhtml_branch_coverage=1
00:08:03.681 --rc genhtml_function_coverage=1
00:08:03.681 --rc genhtml_legend=1
00:08:03.681 --rc geninfo_all_blocks=1
00:08:03.681 --rc geninfo_unexecuted_blocks=1
00:08:03.681
00:08:03.681 '
00:08:03.681 15:34:46 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:08:03.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:03.681 --rc genhtml_branch_coverage=1
00:08:03.681 --rc genhtml_function_coverage=1
00:08:03.681 --rc genhtml_legend=1
00:08:03.681 --rc geninfo_all_blocks=1
00:08:03.681 --rc geninfo_unexecuted_blocks=1
00:08:03.681
00:08:03.681 '
00:08:03.681 15:34:46 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:08:03.681 15:34:46 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58172
00:08:03.681 15:34:46 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:08:03.681 15:34:46 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:08:03.681 15:34:46 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58172
00:08:03.681 15:34:46 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58172 ']'
00:08:03.681 15:34:46 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:03.681 15:34:46 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:03.681 15:34:46 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:03.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:03.681 15:34:46 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:03.681 15:34:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:08:03.946 [2024-12-06 15:34:47.011559] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization...
00:08:03.946 [2024-12-06 15:34:47.011967] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58172 ]
00:08:03.946 [2024-12-06 15:34:47.195353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:04.210 [2024-12-06 15:34:47.347640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:04.210 [2024-12-06 15:34:47.347837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:04.210 [2024-12-06 15:34:47.349192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:08:04.210 [2024-12-06 15:34:47.349216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:08:04.789 15:34:47 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:04.789 15:34:47 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:08:04.789 15:34:47 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:08:04.789 15:34:47 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:04.789 15:34:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:08:04.789 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:08:04.789 POWER: Cannot set governor of lcore 0 to userspace
00:08:04.789 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:08:04.789 POWER: Cannot set governor of lcore 0 to performance
00:08:04.789 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:08:04.789 POWER: Cannot set governor of lcore 0 to userspace
00:08:04.789 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:08:04.789 POWER: Cannot set governor of lcore 0 to userspace
00:08:04.789 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0
00:08:04.789 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:08:04.789 POWER: Unable to set Power Management Environment for lcore 0
00:08:04.789 [2024-12-06 15:34:47.878564] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0
00:08:04.789 [2024-12-06 15:34:47.878595] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0
00:08:04.789 [2024-12-06 15:34:47.878610] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:08:04.789 [2024-12-06 15:34:47.878639] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:08:04.789 [2024-12-06 15:34:47.878651] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:08:04.789 [2024-12-06 15:34:47.878665] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:08:04.789 15:34:47 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:04.789 15:34:47 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:08:04.789 15:34:47 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:04.789 15:34:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:08:05.056 [2024-12-06 15:34:48.287922] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:08:05.056 15:34:48 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.056 15:34:48 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:08:05.056 15:34:48 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:05.056 15:34:48 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:05.056 15:34:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:08:05.056 ************************************
00:08:05.056 START TEST scheduler_create_thread
00:08:05.056 ************************************
00:08:05.056 15:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:08:05.056 15:34:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:08:05.056 15:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.056 15:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:05.056 2
00:08:05.056 15:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.056 15:34:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:08:05.056 15:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.056 15:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:05.056 3
00:08:05.056 15:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.056 15:34:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:08:05.056 15:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.056 15:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:05.056 4
00:08:05.056 15:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.056 15:34:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:08:05.056 15:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.056 15:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:05.314 5
00:08:05.314 15:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.314 15:34:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:08:05.314 15:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.314 15:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:05.314 6
00:08:05.314 15:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.314 15:34:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:08:05.314 15:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.314 15:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:05.314 7
00:08:05.314 15:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.314 15:34:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:08:05.314 15:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.314 15:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:05.314 8
00:08:05.314 15:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.314 15:34:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:08:05.314 15:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.314 15:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:05.314 9
00:08:05.314 15:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.314 15:34:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:08:05.314 15:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.314 15:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:05.314 10
00:08:05.314 15:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.314 15:34:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:08:05.314 15:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.314 15:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:06.715 15:34:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:06.715 15:34:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:08:06.715 15:34:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:08:06.715 15:34:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:06.715 15:34:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:07.283 15:34:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:07.283 15:34:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:08:07.283 15:34:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:07.283 15:34:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:08.220 15:34:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:08.220 15:34:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:08:08.220 15:34:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:08:08.220 15:34:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:08.220 15:34:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:09.158 15:34:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:09.158
00:08:09.158 real	0m3.887s
00:08:09.158 user	0m0.025s
00:08:09.158 sys	0m0.010s
00:08:09.158 15:34:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:09.158 15:34:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:09.158 ************************************
00:08:09.158 END TEST scheduler_create_thread
00:08:09.158 ************************************
00:08:09.158 15:34:52 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:08:09.158 15:34:52 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58172
00:08:09.158 15:34:52 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58172 ']'
00:08:09.158 15:34:52 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58172
00:08:09.158 15:34:52 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:08:09.158 15:34:52 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:09.158 15:34:52 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58172
00:08:09.158 15:34:52 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:08:09.158 killing process with pid 58172
00:08:09.158 15:34:52 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:08:09.158 15:34:52 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58172'
00:08:09.158 15:34:52 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58172
00:08:09.158 15:34:52 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58172
00:08:09.417 [2024-12-06 15:34:52.569240] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:08:10.856
00:08:10.856 real	0m7.173s
00:08:10.856 user	0m14.590s
00:08:10.856 sys	0m0.652s
00:08:10.856 15:34:53 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:10.856 ************************************
00:08:10.856 END TEST event_scheduler
00:08:10.856 ************************************
00:08:10.856 15:34:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:08:10.856 15:34:53 event -- event/event.sh@51 -- # modprobe -n nbd
00:08:10.856 15:34:53 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:08:10.856 15:34:53 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:10.856 15:34:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:10.856 15:34:53 event -- common/autotest_common.sh@10 -- # set +x
00:08:10.856 ************************************
00:08:10.856 START TEST app_repeat
00:08:10.856 ************************************
00:08:10.856 15:34:53 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:08:10.856 15:34:53 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:10.856 15:34:53 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:10.856 15:34:53 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:08:10.856 15:34:53 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:10.856 15:34:53 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:08:10.856 15:34:53 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:08:10.856 15:34:53 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:08:10.856 15:34:53 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58300
00:08:10.856 15:34:53 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:08:10.856 15:34:53 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:08:10.856 15:34:53 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58300'
00:08:10.856 Process app_repeat pid: 58300
00:08:10.856 15:34:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:08:10.856 spdk_app_start Round 0
00:08:10.856 15:34:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:08:10.856 15:34:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58300 /var/tmp/spdk-nbd.sock
00:08:10.856 15:34:53 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58300 ']'
00:08:10.856 15:34:53 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:08:10.856 15:34:53 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:10.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:08:10.856 15:34:53 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:08:10.856 15:34:53 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:10.856 15:34:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:08:10.856 [2024-12-06 15:34:54.011048] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization...
00:08:10.856 [2024-12-06 15:34:54.011188] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58300 ]
00:08:11.116 [2024-12-06 15:34:54.200697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:11.116 [2024-12-06 15:34:54.355542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:11.116 [2024-12-06 15:34:54.355590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:11.684 15:34:54 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:11.684 15:34:54 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:08:11.684 15:34:54 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:08:11.942 Malloc0
00:08:11.942 15:34:55 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:08:12.508 Malloc1
00:08:12.508 15:34:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:08:12.509 15:34:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:12.509 15:34:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:12.509 15:34:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:08:12.509 15:34:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:12.509 15:34:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:08:12.509 15:34:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:08:12.509 15:34:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:12.509 15:34:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:12.509 15:34:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:08:12.509 15:34:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:12.509 15:34:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:08:12.509 15:34:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:08:12.509 15:34:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:08:12.509 15:34:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:12.509 15:34:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:08:12.509 /dev/nbd0
00:08:12.509 15:34:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:08:12.509 15:34:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:08:12.509 15:34:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:08:12.509 15:34:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:08:12.509 15:34:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:12.509 15:34:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:12.509 15:34:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:08:12.509 15:34:55 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:08:12.509 15:34:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:12.509 15:34:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:12.509 15:34:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:08:12.509 1+0 records in
00:08:12.509 1+0
records out 00:08:12.509 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338443 s, 12.1 MB/s 00:08:12.509 15:34:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:12.509 15:34:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:12.509 15:34:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:12.767 15:34:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:12.767 15:34:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:12.767 15:34:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:12.767 15:34:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:12.767 15:34:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:12.767 /dev/nbd1 00:08:12.767 15:34:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:12.767 15:34:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:12.767 15:34:56 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:12.767 15:34:56 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:12.767 15:34:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:12.767 15:34:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:12.767 15:34:56 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:12.767 15:34:56 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:13.025 15:34:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:13.025 15:34:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:13.025 15:34:56 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:13.025 1+0 records in 00:08:13.025 1+0 records out 00:08:13.025 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334381 s, 12.2 MB/s 00:08:13.025 15:34:56 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:13.025 15:34:56 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:13.025 15:34:56 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:13.025 15:34:56 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:13.025 15:34:56 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:13.025 15:34:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:13.025 15:34:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:13.025 15:34:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:13.025 15:34:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:13.025 15:34:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:13.283 15:34:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:13.283 { 00:08:13.283 "nbd_device": "/dev/nbd0", 00:08:13.283 "bdev_name": "Malloc0" 00:08:13.283 }, 00:08:13.283 { 00:08:13.283 "nbd_device": "/dev/nbd1", 00:08:13.283 "bdev_name": "Malloc1" 00:08:13.283 } 00:08:13.283 ]' 00:08:13.283 15:34:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:13.283 { 00:08:13.283 "nbd_device": "/dev/nbd0", 00:08:13.283 "bdev_name": "Malloc0" 00:08:13.283 }, 00:08:13.283 { 00:08:13.283 "nbd_device": "/dev/nbd1", 00:08:13.283 "bdev_name": "Malloc1" 00:08:13.283 } 00:08:13.283 ]' 00:08:13.283 15:34:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:08:13.283 15:34:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:13.283 /dev/nbd1' 00:08:13.283 15:34:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:13.283 /dev/nbd1' 00:08:13.283 15:34:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:13.283 15:34:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:13.283 15:34:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:13.283 15:34:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:13.283 15:34:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:13.283 15:34:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:13.283 15:34:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:13.283 15:34:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:13.283 15:34:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:13.283 15:34:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:13.283 15:34:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:13.283 15:34:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:13.283 256+0 records in 00:08:13.283 256+0 records out 00:08:13.283 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00611561 s, 171 MB/s 00:08:13.283 15:34:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:13.283 15:34:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:13.283 256+0 records in 00:08:13.283 256+0 records out 00:08:13.283 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249592 s, 42.0 MB/s 00:08:13.283 15:34:56 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:13.283 15:34:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:13.283 256+0 records in 00:08:13.283 256+0 records out 00:08:13.283 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0389562 s, 26.9 MB/s 00:08:13.283 15:34:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:13.283 15:34:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:13.283 15:34:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:13.283 15:34:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:13.283 15:34:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:13.283 15:34:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:13.283 15:34:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:13.283 15:34:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:13.284 15:34:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:13.284 15:34:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:13.284 15:34:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:13.284 15:34:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:13.284 15:34:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:13.284 15:34:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:13.284 15:34:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:13.284 15:34:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:13.284 15:34:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:13.284 15:34:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:13.284 15:34:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:13.542 15:34:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:13.542 15:34:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:13.542 15:34:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:13.542 15:34:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:13.542 15:34:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:13.542 15:34:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:13.542 15:34:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:13.542 15:34:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:13.542 15:34:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:13.542 15:34:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:13.801 15:34:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:13.801 15:34:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:13.801 15:34:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:13.801 15:34:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:13.801 15:34:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:13.801 15:34:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:13.801 15:34:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:08:13.801 15:34:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:13.801 15:34:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:13.801 15:34:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:13.801 15:34:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:14.062 15:34:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:14.062 15:34:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:14.062 15:34:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:14.062 15:34:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:14.062 15:34:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:14.062 15:34:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:14.062 15:34:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:14.062 15:34:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:14.062 15:34:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:14.062 15:34:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:14.062 15:34:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:14.062 15:34:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:14.062 15:34:57 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:14.627 15:34:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:16.003 [2024-12-06 15:34:59.008167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:16.003 [2024-12-06 15:34:59.151124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.003 [2024-12-06 15:34:59.151124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.266 
[2024-12-06 15:34:59.386953] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:16.266 [2024-12-06 15:34:59.387113] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:17.644 15:35:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:17.644 spdk_app_start Round 1 00:08:17.644 15:35:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:17.644 15:35:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58300 /var/tmp/spdk-nbd.sock 00:08:17.644 15:35:00 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58300 ']' 00:08:17.644 15:35:00 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:17.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:17.644 15:35:00 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.644 15:35:00 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:08:17.644 15:35:00 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.644 15:35:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:17.900 15:35:00 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:17.900 15:35:00 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:17.900 15:35:00 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:18.158 Malloc0 00:08:18.158 15:35:01 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:18.445 Malloc1 00:08:18.445 15:35:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:18.445 15:35:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:18.445 15:35:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:18.445 15:35:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:18.445 15:35:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:18.445 15:35:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:18.445 15:35:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:18.445 15:35:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:18.445 15:35:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:18.445 15:35:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:18.445 15:35:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:18.445 15:35:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:18.445 15:35:01 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:18.445 15:35:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:18.445 15:35:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:18.445 15:35:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:18.705 /dev/nbd0 00:08:18.705 15:35:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:18.705 15:35:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:18.705 15:35:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:18.705 15:35:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:18.705 15:35:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:18.705 15:35:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:18.705 15:35:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:18.705 15:35:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:18.705 15:35:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:18.705 15:35:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:18.705 15:35:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:18.705 1+0 records in 00:08:18.705 1+0 records out 00:08:18.705 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000442752 s, 9.3 MB/s 00:08:18.705 15:35:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:18.705 15:35:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:18.705 15:35:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:18.705 15:35:01 
event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:18.705 15:35:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:18.705 15:35:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:18.705 15:35:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:18.705 15:35:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:18.964 /dev/nbd1 00:08:18.964 15:35:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:18.964 15:35:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:18.964 15:35:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:18.964 15:35:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:18.964 15:35:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:18.964 15:35:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:18.964 15:35:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:18.964 15:35:02 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:18.964 15:35:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:18.964 15:35:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:18.964 15:35:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:18.964 1+0 records in 00:08:18.964 1+0 records out 00:08:18.964 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463427 s, 8.8 MB/s 00:08:18.964 15:35:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:18.964 15:35:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:18.964 15:35:02 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:18.964 15:35:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:18.964 15:35:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:18.964 15:35:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:18.964 15:35:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:18.964 15:35:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:18.964 15:35:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:18.964 15:35:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:19.223 15:35:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:19.223 { 00:08:19.223 "nbd_device": "/dev/nbd0", 00:08:19.223 "bdev_name": "Malloc0" 00:08:19.223 }, 00:08:19.223 { 00:08:19.223 "nbd_device": "/dev/nbd1", 00:08:19.223 "bdev_name": "Malloc1" 00:08:19.223 } 00:08:19.223 ]' 00:08:19.223 15:35:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:19.223 { 00:08:19.223 "nbd_device": "/dev/nbd0", 00:08:19.223 "bdev_name": "Malloc0" 00:08:19.223 }, 00:08:19.223 { 00:08:19.223 "nbd_device": "/dev/nbd1", 00:08:19.223 "bdev_name": "Malloc1" 00:08:19.223 } 00:08:19.223 ]' 00:08:19.223 15:35:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:19.223 15:35:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:19.223 /dev/nbd1' 00:08:19.223 15:35:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:19.223 15:35:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:19.223 /dev/nbd1' 00:08:19.223 15:35:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:19.223 15:35:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:19.223 
15:35:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:19.223 15:35:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:19.223 15:35:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:19.223 15:35:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:19.223 15:35:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:19.223 15:35:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:19.223 15:35:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:19.223 15:35:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:19.223 15:35:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:19.223 256+0 records in 00:08:19.223 256+0 records out 00:08:19.223 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00545302 s, 192 MB/s 00:08:19.223 15:35:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:19.223 15:35:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:19.223 256+0 records in 00:08:19.223 256+0 records out 00:08:19.223 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0299066 s, 35.1 MB/s 00:08:19.223 15:35:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:19.223 15:35:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:19.223 256+0 records in 00:08:19.223 256+0 records out 00:08:19.223 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0342209 s, 30.6 MB/s 00:08:19.223 15:35:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:08:19.223 15:35:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:19.223 15:35:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:19.223 15:35:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:19.223 15:35:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:19.223 15:35:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:19.223 15:35:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:19.223 15:35:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:19.223 15:35:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:19.223 15:35:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:19.223 15:35:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:19.482 15:35:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:19.482 15:35:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:19.482 15:35:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:19.482 15:35:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:19.482 15:35:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:19.482 15:35:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:19.482 15:35:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:19.482 15:35:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:19.482 15:35:02 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:19.482 15:35:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:19.482 15:35:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:19.482 15:35:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:19.482 15:35:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:19.482 15:35:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:19.482 15:35:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:19.482 15:35:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:19.482 15:35:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:19.482 15:35:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:19.742 15:35:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:19.742 15:35:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:19.742 15:35:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:19.742 15:35:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:19.742 15:35:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:19.742 15:35:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:19.742 15:35:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:19.742 15:35:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:19.742 15:35:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:19.742 15:35:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:19.742 15:35:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:20.001 15:35:03 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:20.001 15:35:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:20.001 15:35:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:20.259 15:35:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:20.259 15:35:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:20.259 15:35:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:20.259 15:35:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:20.259 15:35:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:20.259 15:35:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:20.259 15:35:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:20.259 15:35:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:20.259 15:35:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:20.259 15:35:03 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:20.518 15:35:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:21.896 [2024-12-06 15:35:05.095408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:22.154 [2024-12-06 15:35:05.235429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.154 [2024-12-06 15:35:05.235447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.412 [2024-12-06 15:35:05.459005] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:22.412 [2024-12-06 15:35:05.459152] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:08:23.792 15:35:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:23.792 spdk_app_start Round 2 00:08:23.792 15:35:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:23.792 15:35:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58300 /var/tmp/spdk-nbd.sock 00:08:23.792 15:35:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58300 ']' 00:08:23.792 15:35:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:23.792 15:35:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:23.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:23.792 15:35:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:23.792 15:35:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:23.792 15:35:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:23.792 15:35:07 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:23.792 15:35:07 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:23.792 15:35:07 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:24.358 Malloc0 00:08:24.358 15:35:07 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:24.658 Malloc1 00:08:24.658 15:35:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:24.658 15:35:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:24.658 15:35:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:24.658 
15:35:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:24.658 15:35:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:24.658 15:35:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:24.658 15:35:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:24.658 15:35:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:24.658 15:35:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:24.658 15:35:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:24.658 15:35:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:24.658 15:35:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:24.658 15:35:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:24.658 15:35:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:24.658 15:35:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:24.658 15:35:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:24.969 /dev/nbd0 00:08:24.969 15:35:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:24.969 15:35:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:24.969 15:35:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:24.969 15:35:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:24.969 15:35:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:24.969 15:35:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:24.969 15:35:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:24.969 15:35:07 
event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:24.969 15:35:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:24.969 15:35:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:24.969 15:35:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:24.969 1+0 records in 00:08:24.969 1+0 records out 00:08:24.969 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266315 s, 15.4 MB/s 00:08:24.969 15:35:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:24.969 15:35:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:24.969 15:35:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:24.969 15:35:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:24.969 15:35:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:24.969 15:35:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:24.969 15:35:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:24.969 15:35:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:24.969 /dev/nbd1 00:08:25.241 15:35:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:25.241 15:35:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:25.241 15:35:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:25.241 15:35:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:25.241 15:35:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:25.241 15:35:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:25.241 15:35:08 
event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:25.241 15:35:08 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:25.241 15:35:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:25.241 15:35:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:25.241 15:35:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:25.241 1+0 records in 00:08:25.241 1+0 records out 00:08:25.241 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529988 s, 7.7 MB/s 00:08:25.241 15:35:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:25.241 15:35:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:25.241 15:35:08 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:25.241 15:35:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:25.241 15:35:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:25.241 15:35:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:25.241 15:35:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:25.241 15:35:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:25.241 15:35:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.241 15:35:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:25.241 15:35:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:25.241 { 00:08:25.241 "nbd_device": "/dev/nbd0", 00:08:25.241 "bdev_name": "Malloc0" 00:08:25.241 }, 00:08:25.241 { 00:08:25.241 "nbd_device": "/dev/nbd1", 00:08:25.241 "bdev_name": 
"Malloc1" 00:08:25.241 } 00:08:25.241 ]' 00:08:25.241 15:35:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:25.241 { 00:08:25.241 "nbd_device": "/dev/nbd0", 00:08:25.241 "bdev_name": "Malloc0" 00:08:25.241 }, 00:08:25.241 { 00:08:25.241 "nbd_device": "/dev/nbd1", 00:08:25.241 "bdev_name": "Malloc1" 00:08:25.241 } 00:08:25.241 ]' 00:08:25.241 15:35:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:25.500 15:35:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:25.500 /dev/nbd1' 00:08:25.500 15:35:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:25.500 /dev/nbd1' 00:08:25.500 15:35:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:25.500 15:35:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:25.500 15:35:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:25.500 15:35:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:25.501 15:35:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:25.501 15:35:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:25.501 15:35:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:25.501 15:35:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:25.501 15:35:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:25.501 15:35:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:25.501 15:35:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:25.501 15:35:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:25.501 256+0 records in 00:08:25.501 256+0 records out 00:08:25.501 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124375 s, 84.3 MB/s 
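The dd records above show the verify flow: a 1 MiB random pattern file is generated, written to each nbd device with `oflag=direct`, then compared back with `cmp -b -n 1M`. A loopback-free sketch of the same write/verify shape, using temp files as hypothetical stand-ins for the nbd devices:

```shell
# Write a random pattern file, copy it to a target, compare byte-for-byte.
tmp_file=$(mktemp)
target=$(mktemp)
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
dd if="$tmp_file" of="$target" bs=4096 count=256 2>/dev/null
# -b prints differing bytes, -n limits the comparison to 1 MiB (GNU cmp
# accepts size suffixes), matching the trace's 'cmp -b -n 1M'.
cmp -b -n 1M "$tmp_file" "$target" && echo "verify ok"
rm -f "$tmp_file" "$target"
```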
00:08:25.501 15:35:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:25.501 15:35:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:25.501 256+0 records in 00:08:25.501 256+0 records out 00:08:25.501 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261433 s, 40.1 MB/s 00:08:25.501 15:35:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:25.501 15:35:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:25.501 256+0 records in 00:08:25.501 256+0 records out 00:08:25.501 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0349455 s, 30.0 MB/s 00:08:25.501 15:35:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:25.501 15:35:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:25.501 15:35:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:25.501 15:35:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:25.501 15:35:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:25.501 15:35:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:25.501 15:35:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:25.501 15:35:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:25.501 15:35:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:25.501 15:35:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:25.501 15:35:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:08:25.501 15:35:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:25.501 15:35:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:25.501 15:35:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.501 15:35:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:25.501 15:35:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:25.501 15:35:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:25.501 15:35:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:25.501 15:35:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:25.759 15:35:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:25.759 15:35:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:25.759 15:35:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:25.759 15:35:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:25.759 15:35:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:25.759 15:35:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:25.759 15:35:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:25.759 15:35:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:25.759 15:35:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:25.759 15:35:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:26.018 15:35:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:26.018 15:35:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:08:26.018 15:35:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:26.018 15:35:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:26.018 15:35:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:26.018 15:35:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:26.018 15:35:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:26.018 15:35:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:26.018 15:35:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:26.018 15:35:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:26.277 15:35:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:26.537 15:35:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:26.537 15:35:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:26.537 15:35:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:26.537 15:35:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:26.537 15:35:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:26.537 15:35:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:26.537 15:35:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:26.537 15:35:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:26.537 15:35:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:26.537 15:35:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:26.537 15:35:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:26.537 15:35:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:26.537 15:35:09 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:27.105 15:35:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:28.482 [2024-12-06 15:35:11.422877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:28.482 [2024-12-06 15:35:11.576578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.482 [2024-12-06 15:35:11.576580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.740 [2024-12-06 15:35:11.813960] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:28.740 [2024-12-06 15:35:11.814124] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:30.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:30.119 15:35:13 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58300 /var/tmp/spdk-nbd.sock 00:08:30.119 15:35:13 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58300 ']' 00:08:30.119 15:35:13 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:30.119 15:35:13 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:30.119 15:35:13 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:08:30.119 15:35:13 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:30.119 15:35:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:30.119 15:35:13 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:30.119 15:35:13 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:30.119 15:35:13 event.app_repeat -- event/event.sh@39 -- # killprocess 58300 00:08:30.119 15:35:13 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58300 ']' 00:08:30.119 15:35:13 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58300 00:08:30.119 15:35:13 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:08:30.119 15:35:13 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:30.119 15:35:13 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58300 00:08:30.378 killing process with pid 58300 00:08:30.378 15:35:13 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:30.378 15:35:13 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:30.378 15:35:13 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58300' 00:08:30.378 15:35:13 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58300 00:08:30.378 15:35:13 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58300 00:08:31.315 spdk_app_start is called in Round 0. 00:08:31.315 Shutdown signal received, stop current app iteration 00:08:31.315 Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 reinitialization... 00:08:31.315 spdk_app_start is called in Round 1. 00:08:31.315 Shutdown signal received, stop current app iteration 00:08:31.315 Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 reinitialization... 00:08:31.315 spdk_app_start is called in Round 2. 
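The killprocess steps traced above check the pid with `kill -0`, look up its command name via `ps --no-headers -o comm=`, refuse to signal anything named `sudo`, then kill and wait. A hedged sketch of that guard, using a hypothetical `sleep` child in place of the spdk target:

```shell
# Spawn a stand-in process to kill.
sleep 30 &
pid=$!
# kill -0 sends no signal; it only tests that the pid exists.
kill -0 "$pid" 2>/dev/null || { echo "not running"; exit 1; }
process_name=$(ps --no-headers -o comm= -p "$pid")
# Mirror the trace's safety check: never signal a process running as sudo.
[ "$process_name" != "sudo" ] && kill "$pid"
wait "$pid" 2>/dev/null
echo "killed $pid"
```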
00:08:31.315 Shutdown signal received, stop current app iteration 00:08:31.315 Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 reinitialization... 00:08:31.315 spdk_app_start is called in Round 3. 00:08:31.315 Shutdown signal received, stop current app iteration 00:08:31.574 15:35:14 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:31.574 15:35:14 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:31.574 00:08:31.574 real 0m20.692s 00:08:31.574 user 0m43.755s 00:08:31.574 sys 0m3.852s 00:08:31.574 ************************************ 00:08:31.574 END TEST app_repeat 00:08:31.574 ************************************ 00:08:31.574 15:35:14 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.574 15:35:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:31.574 15:35:14 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:31.574 15:35:14 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:31.574 15:35:14 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.574 15:35:14 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.574 15:35:14 event -- common/autotest_common.sh@10 -- # set +x 00:08:31.574 ************************************ 00:08:31.574 START TEST cpu_locks 00:08:31.574 ************************************ 00:08:31.574 15:35:14 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:31.574 * Looking for test storage... 
00:08:31.574 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:31.574 15:35:14 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:31.574 15:35:14 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:31.574 15:35:14 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:08:31.833 15:35:14 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:31.833 15:35:14 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:31.833 15:35:14 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:31.833 15:35:14 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:31.833 15:35:14 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:08:31.833 15:35:14 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:08:31.833 15:35:14 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:08:31.833 15:35:14 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:08:31.833 15:35:14 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:08:31.833 15:35:14 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:08:31.833 15:35:14 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:08:31.833 15:35:14 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:31.833 15:35:14 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:08:31.833 15:35:14 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:08:31.833 15:35:14 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:31.833 15:35:14 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:31.833 15:35:14 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:08:31.833 15:35:14 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:08:31.833 15:35:14 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:31.833 15:35:14 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:08:31.833 15:35:14 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:08:31.833 15:35:14 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:08:31.833 15:35:14 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:08:31.833 15:35:14 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:31.833 15:35:14 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:08:31.833 15:35:14 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:08:31.833 15:35:14 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:31.833 15:35:14 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:31.833 15:35:14 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:08:31.833 15:35:14 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:31.833 15:35:14 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:31.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.833 --rc genhtml_branch_coverage=1 00:08:31.833 --rc genhtml_function_coverage=1 00:08:31.833 --rc genhtml_legend=1 00:08:31.833 --rc geninfo_all_blocks=1 00:08:31.833 --rc geninfo_unexecuted_blocks=1 00:08:31.833 00:08:31.833 ' 00:08:31.833 15:35:14 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:31.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.833 --rc genhtml_branch_coverage=1 00:08:31.833 --rc genhtml_function_coverage=1 00:08:31.833 --rc genhtml_legend=1 00:08:31.833 --rc geninfo_all_blocks=1 00:08:31.833 --rc geninfo_unexecuted_blocks=1 
00:08:31.833 00:08:31.833 ' 00:08:31.833 15:35:14 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:31.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.833 --rc genhtml_branch_coverage=1 00:08:31.833 --rc genhtml_function_coverage=1 00:08:31.833 --rc genhtml_legend=1 00:08:31.833 --rc geninfo_all_blocks=1 00:08:31.833 --rc geninfo_unexecuted_blocks=1 00:08:31.833 00:08:31.833 ' 00:08:31.833 15:35:14 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:31.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.833 --rc genhtml_branch_coverage=1 00:08:31.833 --rc genhtml_function_coverage=1 00:08:31.833 --rc genhtml_legend=1 00:08:31.833 --rc geninfo_all_blocks=1 00:08:31.833 --rc geninfo_unexecuted_blocks=1 00:08:31.833 00:08:31.833 ' 00:08:31.833 15:35:14 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:31.833 15:35:14 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:31.833 15:35:14 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:31.833 15:35:14 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:31.833 15:35:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.833 15:35:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.833 15:35:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:31.833 ************************************ 00:08:31.833 START TEST default_locks 00:08:31.833 ************************************ 00:08:31.833 15:35:14 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:08:31.833 15:35:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:31.833 15:35:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58760 00:08:31.833 
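The `lt 1.15 2` trace above splits each dotted version on `.`, `-`, and `:` into arrays and compares component-by-component, padding missing components with 0. A minimal sketch of that logic in one function (assumes purely numeric components; names here are illustrative, not the script's own):

```shell
# Return 0 when $1 is strictly less than $2 under component-wise
# numeric comparison, as in the traced cmp_versions helper.
version_lt() {
    local IFS=.-:
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # pad short versions with 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not less-than
}
version_lt 1.15 2 && echo "1.15 < 2"
```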
15:35:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58760 00:08:31.833 15:35:14 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58760 ']' 00:08:31.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.833 15:35:14 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.833 15:35:14 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:31.833 15:35:14 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.833 15:35:14 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:31.833 15:35:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:31.833 [2024-12-06 15:35:15.114655] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:08:31.833 [2024-12-06 15:35:15.114836] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58760 ] 00:08:32.091 [2024-12-06 15:35:15.310898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.349 [2024-12-06 15:35:15.480961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.725 15:35:16 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:33.725 15:35:16 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:08:33.725 15:35:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58760 00:08:33.725 15:35:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58760 00:08:33.725 15:35:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:33.983 15:35:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58760 00:08:33.983 15:35:17 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58760 ']' 00:08:33.983 15:35:17 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58760 00:08:33.983 15:35:17 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:08:33.983 15:35:17 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:33.983 15:35:17 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58760 00:08:33.983 killing process with pid 58760 00:08:33.983 15:35:17 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:33.983 15:35:17 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:33.983 15:35:17 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58760' 00:08:33.983 15:35:17 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58760 00:08:33.983 15:35:17 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58760 00:08:37.266 15:35:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58760 00:08:37.266 15:35:19 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:08:37.266 15:35:19 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58760 00:08:37.266 15:35:19 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:37.266 15:35:19 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:37.266 15:35:19 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:37.266 15:35:19 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:37.266 15:35:19 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58760 00:08:37.266 15:35:19 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58760 ']' 00:08:37.266 15:35:19 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.266 15:35:19 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:37.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.266 15:35:19 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:37.266 15:35:19 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:37.266 15:35:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:37.266 ERROR: process (pid: 58760) is no longer running 00:08:37.266 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58760) - No such process 00:08:37.266 15:35:19 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:37.266 15:35:19 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:08:37.266 15:35:19 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:08:37.266 15:35:19 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:37.266 15:35:19 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:37.266 15:35:19 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:37.266 15:35:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:37.266 15:35:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:37.266 15:35:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:37.266 15:35:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:37.266 00:08:37.266 real 0m4.899s 00:08:37.266 user 0m4.690s 00:08:37.266 sys 0m0.970s 00:08:37.266 ************************************ 00:08:37.266 END TEST default_locks 00:08:37.266 ************************************ 00:08:37.266 15:35:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.266 15:35:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:37.266 15:35:19 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:37.266 15:35:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:08:37.266 15:35:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.266 15:35:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:37.266 ************************************ 00:08:37.266 START TEST default_locks_via_rpc 00:08:37.266 ************************************ 00:08:37.266 15:35:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:08:37.266 15:35:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58841 00:08:37.267 15:35:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:37.267 15:35:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58841 00:08:37.267 15:35:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58841 ']' 00:08:37.267 15:35:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.267 15:35:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:37.267 15:35:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.267 15:35:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:37.267 15:35:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.267 [2024-12-06 15:35:20.075968] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:08:37.267 [2024-12-06 15:35:20.076130] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58841 ]
00:08:37.267 [2024-12-06 15:35:20.251838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:37.267 [2024-12-06 15:35:20.442034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:38.203 15:35:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:38.203 15:35:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:08:38.203 15:35:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:08:38.203 15:35:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:38.203 15:35:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:38.203 15:35:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:38.203 15:35:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:08:38.203 15:35:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:08:38.203 15:35:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:08:38.203 15:35:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:08:38.203 15:35:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:08:38.203 15:35:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:38.203 15:35:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:38.463 15:35:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:38.463 15:35:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58841
00:08:38.463 15:35:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:08:38.463 15:35:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58841
00:08:39.031 15:35:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58841
00:08:39.031 15:35:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58841 ']'
00:08:39.031 15:35:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58841
00:08:39.031 15:35:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:08:39.031 15:35:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:39.031 15:35:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58841
00:08:39.031 killing process with pid 58841 15:35:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:39.031 15:35:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:39.031 15:35:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58841'
00:08:39.031 15:35:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58841
00:08:39.031 15:35:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58841
00:08:41.565
00:08:41.565 real 0m4.813s
00:08:41.565 user 0m4.609s
00:08:41.565 sys 0m0.920s
00:08:41.565 15:35:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:41.565 15:35:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:41.565 ************************************
00:08:41.565 END TEST default_locks_via_rpc
00:08:41.565 ************************************
00:08:41.565 15:35:24 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:08:41.565 15:35:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:41.565 15:35:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:41.565 15:35:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:08:41.565 ************************************
00:08:41.565 START TEST non_locking_app_on_locked_coremask
00:08:41.565 ************************************
00:08:41.565 15:35:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:08:41.565 15:35:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58926
00:08:41.565 15:35:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:08:41.565 15:35:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58926 /var/tmp/spdk.sock
00:08:41.565 15:35:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58926 ']'
00:08:41.565 15:35:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:41.565 15:35:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:41.565 15:35:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:41.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:41.565 15:35:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:41.565 15:35:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:41.824 [2024-12-06 15:35:24.942617] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization...
00:08:41.824 [2024-12-06 15:35:24.943081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58926 ]
00:08:42.082 [2024-12-06 15:35:25.120824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:42.082 [2024-12-06 15:35:25.273268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:43.458 15:35:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:43.458 15:35:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:08:43.458 15:35:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58945
00:08:43.458 15:35:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:08:43.458 15:35:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58945 /var/tmp/spdk2.sock
00:08:43.458 15:35:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58945 ']'
00:08:43.458 15:35:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:08:43.458 15:35:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:43.458 15:35:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:08:43.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:08:43.458 15:35:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:43.458 15:35:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:43.458 [2024-12-06 15:35:26.443593] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization...
00:08:43.458 [2024-12-06 15:35:26.443767] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58945 ]
00:08:43.458 [2024-12-06 15:35:26.636707] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:08:43.458 [2024-12-06 15:35:26.636828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:43.716 [2024-12-06 15:35:26.951037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:46.249 15:35:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:46.249 15:35:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:08:46.249 15:35:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58926
00:08:46.249 15:35:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58926
00:08:46.249 15:35:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:08:46.814 15:35:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58926
00:08:46.814 15:35:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58926 ']'
00:08:46.814 15:35:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58926
00:08:46.814 15:35:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:08:46.814 15:35:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:46.814 15:35:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58926
00:08:46.814 15:35:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:46.814 killing process with pid 58926 15:35:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:46.814 15:35:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58926'
00:08:46.814 15:35:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58926
00:08:46.814 15:35:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58926
00:08:53.401 15:35:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58945
00:08:53.401 15:35:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58945 ']'
00:08:53.401 15:35:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58945
00:08:53.401 15:35:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:08:53.401 15:35:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:53.402 15:35:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58945
00:08:53.402 killing process with pid 58945 15:35:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:53.402 15:35:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:53.402 15:35:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58945'
00:08:53.402 15:35:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58945
00:08:53.402 15:35:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58945
00:08:55.302
00:08:55.302 real 0m13.427s
00:08:55.302 user 0m13.502s
00:08:55.302 sys 0m1.891s
00:08:55.302 15:35:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:55.302 ************************************
00:08:55.302 END TEST non_locking_app_on_locked_coremask
00:08:55.302 ************************************
00:08:55.302 15:35:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:55.302 15:35:38 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:08:55.302 15:35:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:55.302 15:35:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:55.302 15:35:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:08:55.302 ************************************
00:08:55.302 START TEST locking_app_on_unlocked_coremask
00:08:55.302 ************************************
00:08:55.302 15:35:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:08:55.302 15:35:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:08:55.302 15:35:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59115
00:08:55.302 15:35:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59115 /var/tmp/spdk.sock
00:08:55.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:55.302 15:35:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59115 ']'
00:08:55.302 15:35:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:55.302 15:35:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:55.302 15:35:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:55.302 15:35:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:55.302 15:35:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:55.302 [2024-12-06 15:35:38.457388] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization...
00:08:55.302 [2024-12-06 15:35:38.457571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59115 ]
00:08:55.560 [2024-12-06 15:35:38.644930] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:08:55.560 [2024-12-06 15:35:38.645020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:55.560 [2024-12-06 15:35:38.795125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:56.936 15:35:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:56.936 15:35:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:08:56.936 15:35:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59131
00:08:56.936 15:35:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59131 /var/tmp/spdk2.sock
00:08:56.936 15:35:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:08:56.936 15:35:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59131 ']'
00:08:56.936 15:35:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:08:56.936 15:35:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:56.936 15:35:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:08:56.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:08:56.936 15:35:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:56.936 15:35:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:56.936 [2024-12-06 15:35:40.029129] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization...
00:08:56.936 [2024-12-06 15:35:40.029593] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59131 ]
00:08:56.936 [2024-12-06 15:35:40.224320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:57.502 [2024-12-06 15:35:40.546810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:00.053 15:35:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:00.053 15:35:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:00.053 15:35:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59131
00:09:00.053 15:35:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59131
00:09:00.053 15:35:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:09:00.620 15:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59115
00:09:00.620 15:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59115 ']'
00:09:00.620 15:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59115
00:09:00.620 15:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:09:00.620 15:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:00.620 15:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59115
00:09:00.620 killing process with pid 59115 15:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:00.620 15:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:00.620 15:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59115'
00:09:00.620 15:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59115
00:09:00.620 15:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59115
00:09:05.937 15:35:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59131
00:09:05.937 15:35:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59131 ']'
00:09:05.937 15:35:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59131
00:09:05.937 15:35:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:09:05.937 15:35:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:05.937 15:35:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59131
00:09:06.198 killing process with pid 59131 15:35:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:06.198 15:35:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:06.198 15:35:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59131'
00:09:06.198 15:35:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59131
00:09:06.198 15:35:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59131
00:09:08.746 ************************************
00:09:08.746 END TEST locking_app_on_unlocked_coremask
00:09:08.746 ************************************
00:09:08.746
00:09:08.746 real 0m13.601s
00:09:08.746 user 0m13.735s
00:09:08.746 sys 0m1.947s
00:09:08.746 15:35:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:08.746 15:35:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:08.746 15:35:51 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:09:08.746 15:35:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:08.746 15:35:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:08.746 15:35:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:08.746 ************************************
00:09:08.746 START TEST locking_app_on_locked_coremask
00:09:08.746 ************************************
00:09:08.746 15:35:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:09:08.746 15:35:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59301
00:09:08.746 15:35:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:09:08.746 15:35:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59301 /var/tmp/spdk.sock
00:09:08.746 15:35:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59301 ']'
00:09:08.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:08.746 15:35:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:08.746 15:35:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:08.746 15:35:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:08.746 15:35:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:08.746 15:35:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:09.006 [2024-12-06 15:35:52.123629] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization...
00:09:09.006 [2024-12-06 15:35:52.123778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59301 ]
00:09:09.264 [2024-12-06 15:35:52.309340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:09.264 [2024-12-06 15:35:52.456672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:10.200 15:35:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:10.200 15:35:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:10.200 15:35:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:09:10.200 15:35:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59317
00:09:10.200 15:35:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59317 /var/tmp/spdk2.sock
00:09:10.200 15:35:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:09:10.200 15:35:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59317 /var/tmp/spdk2.sock
00:09:10.200 15:35:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:09:10.200 15:35:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:10.200 15:35:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:09:10.200 15:35:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:10.200 15:35:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59317 /var/tmp/spdk2.sock
00:09:10.200 15:35:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59317 ']'
00:09:10.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:10.201 15:35:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:10.201 15:35:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:10.201 15:35:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:10.201 15:35:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:10.201 15:35:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:10.458 [2024-12-06 15:35:53.582152] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization...
00:09:10.458 [2024-12-06 15:35:53.582288] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59317 ]
00:09:10.716 [2024-12-06 15:35:53.769769] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59301 has claimed it.
00:09:10.716 [2024-12-06 15:35:53.769851] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:09:10.974 ERROR: process (pid: 59317) is no longer running
00:09:10.974 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59317) - No such process
00:09:10.974 15:35:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:10.974 15:35:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:09:10.974 15:35:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:09:10.974 15:35:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:10.974 15:35:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:10.974 15:35:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:10.974 15:35:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59301
00:09:10.974 15:35:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59301
00:09:10.974 15:35:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:09:11.542 15:35:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59301
00:09:11.542 15:35:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59301 ']'
00:09:11.542 15:35:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59301
00:09:11.542 15:35:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:09:11.542 15:35:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:11.542 15:35:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59301
00:09:11.542 killing process with pid 59301 15:35:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:11.542 15:35:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:11.542 15:35:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59301'
00:09:11.542 15:35:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59301
00:09:11.542 15:35:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59301
00:09:14.828
00:09:14.828 real 0m5.437s
00:09:14.828 user 0m5.467s
00:09:14.828 sys 0m1.121s
00:09:14.828 15:35:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:14.828 ************************************
00:09:14.828 END TEST locking_app_on_locked_coremask
00:09:14.828 ************************************
00:09:14.828 15:35:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:14.828 15:35:57 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:09:14.828 15:35:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:14.828 15:35:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:14.828 15:35:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:14.828 ************************************
00:09:14.828 START TEST locking_overlapped_coremask
00:09:14.828 ************************************
00:09:14.828 15:35:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:09:14.828 15:35:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59392
00:09:14.828 15:35:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59392 /var/tmp/spdk.sock
00:09:14.828 15:35:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:09:14.828 15:35:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59392 ']'
00:09:14.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:14.828 15:35:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:14.828 15:35:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:14.828 15:35:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:14.828 15:35:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.828 15:35:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:14.828 [2024-12-06 15:35:57.639795] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:09:14.828 [2024-12-06 15:35:57.639949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59392 ] 00:09:14.828 [2024-12-06 15:35:57.831499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:14.828 [2024-12-06 15:35:57.981596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.828 [2024-12-06 15:35:57.981714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.828 [2024-12-06 15:35:57.981752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:15.759 15:35:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.759 15:35:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:15.759 15:35:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:15.759 15:35:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59416 00:09:15.759 15:35:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59416 /var/tmp/spdk2.sock 00:09:15.759 15:35:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:15.759 15:35:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59416 
/var/tmp/spdk2.sock 00:09:15.759 15:35:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:15.759 15:35:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:15.759 15:35:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:16.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:16.017 15:35:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:16.017 15:35:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59416 /var/tmp/spdk2.sock 00:09:16.017 15:35:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59416 ']' 00:09:16.017 15:35:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:16.017 15:35:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:16.017 15:35:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:16.017 15:35:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:16.017 15:35:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:16.017 [2024-12-06 15:35:59.165482] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:09:16.017 [2024-12-06 15:35:59.165910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59416 ] 00:09:16.276 [2024-12-06 15:35:59.355689] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59392 has claimed it. 00:09:16.276 [2024-12-06 15:35:59.355786] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:16.535 ERROR: process (pid: 59416) is no longer running 00:09:16.535 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59416) - No such process 00:09:16.535 15:35:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:16.535 15:35:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:16.535 15:35:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:16.535 15:35:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:16.535 15:35:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:16.535 15:35:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:16.535 15:35:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:16.535 15:35:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:16.535 15:35:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:16.535 15:35:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 
/var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:16.535 15:35:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59392 00:09:16.535 15:35:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59392 ']' 00:09:16.535 15:35:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59392 00:09:16.535 15:35:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:09:16.535 15:35:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:16.535 15:35:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59392 00:09:16.793 killing process with pid 59392 00:09:16.793 15:35:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:16.793 15:35:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:16.793 15:35:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59392' 00:09:16.793 15:35:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59392 00:09:16.793 15:35:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59392 00:09:19.328 ************************************ 00:09:19.328 END TEST locking_overlapped_coremask 00:09:19.328 ************************************ 00:09:19.328 00:09:19.328 real 0m5.009s 00:09:19.328 user 0m13.404s 00:09:19.328 sys 0m0.863s 00:09:19.328 15:36:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.328 
15:36:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:19.328 15:36:02 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:19.328 15:36:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:19.328 15:36:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.328 15:36:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:19.328 ************************************ 00:09:19.328 START TEST locking_overlapped_coremask_via_rpc 00:09:19.328 ************************************ 00:09:19.328 15:36:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:09:19.328 15:36:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59481 00:09:19.328 15:36:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59481 /var/tmp/spdk.sock 00:09:19.328 15:36:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59481 ']' 00:09:19.328 15:36:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:19.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.328 15:36:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.328 15:36:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:19.328 15:36:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:19.328 15:36:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:19.328 15:36:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.587 [2024-12-06 15:36:02.738391] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:09:19.587 [2024-12-06 15:36:02.738600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59481 ] 00:09:19.846 [2024-12-06 15:36:02.933412] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:19.846 [2024-12-06 15:36:02.933495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:19.846 [2024-12-06 15:36:03.087625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.846 [2024-12-06 15:36:03.087805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.846 [2024-12-06 15:36:03.087839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:21.222 15:36:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:21.222 15:36:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:21.222 15:36:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59509 00:09:21.222 15:36:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:21.222 15:36:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59509 /var/tmp/spdk2.sock 00:09:21.222 15:36:04 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59509 ']' 00:09:21.222 15:36:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:21.222 15:36:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:21.222 15:36:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:21.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:21.222 15:36:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:21.222 15:36:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.222 [2024-12-06 15:36:04.273755] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:09:21.222 [2024-12-06 15:36:04.274192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59509 ] 00:09:21.222 [2024-12-06 15:36:04.467715] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:21.223 [2024-12-06 15:36:04.467807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:21.790 [2024-12-06 15:36:04.807309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:21.790 [2024-12-06 15:36:04.807403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:21.790 [2024-12-06 15:36:04.807415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:24.320 15:36:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:24.320 15:36:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:24.320 15:36:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:24.320 15:36:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.320 15:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.320 15:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.320 15:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:24.320 15:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:24.320 15:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:24.320 15:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:24.321 15:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:24.321 15:36:07 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:24.321 15:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:24.321 15:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:24.321 15:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.321 15:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.321 [2024-12-06 15:36:07.024749] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59481 has claimed it. 00:09:24.321 request: 00:09:24.321 { 00:09:24.321 "method": "framework_enable_cpumask_locks", 00:09:24.321 "req_id": 1 00:09:24.321 } 00:09:24.321 Got JSON-RPC error response 00:09:24.321 response: 00:09:24.321 { 00:09:24.321 "code": -32603, 00:09:24.321 "message": "Failed to claim CPU core: 2" 00:09:24.321 } 00:09:24.321 15:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:24.321 15:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:24.321 15:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:24.321 15:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:24.321 15:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:24.321 15:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59481 /var/tmp/spdk.sock 00:09:24.321 15:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59481 ']' 00:09:24.321 15:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.321 15:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:24.321 15:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.321 15:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:24.321 15:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.321 15:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:24.321 15:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:24.321 15:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59509 /var/tmp/spdk2.sock 00:09:24.321 15:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59509 ']' 00:09:24.321 15:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:24.321 15:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:24.321 15:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:24.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:09:24.321 15:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:24.321 15:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.321 15:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:24.321 15:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:24.321 ************************************ 00:09:24.321 END TEST locking_overlapped_coremask_via_rpc 00:09:24.321 ************************************ 00:09:24.321 15:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:24.321 15:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:24.321 15:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:24.321 15:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:24.321 00:09:24.321 real 0m4.965s 00:09:24.321 user 0m1.410s 00:09:24.321 sys 0m0.290s 00:09:24.321 15:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.321 15:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.321 15:36:07 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:09:24.321 15:36:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59481 ]] 00:09:24.321 15:36:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59481 00:09:24.321 15:36:07 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59481 ']' 00:09:24.321 15:36:07 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59481 00:09:24.321 15:36:07 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:24.583 15:36:07 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:24.583 15:36:07 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59481 00:09:24.583 killing process with pid 59481 00:09:24.583 15:36:07 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:24.583 15:36:07 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:24.583 15:36:07 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59481' 00:09:24.583 15:36:07 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59481 00:09:24.583 15:36:07 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59481 00:09:27.111 15:36:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59509 ]] 00:09:27.111 15:36:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59509 00:09:27.111 15:36:10 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59509 ']' 00:09:27.111 15:36:10 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59509 00:09:27.111 15:36:10 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:27.111 15:36:10 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:27.111 15:36:10 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59509 00:09:27.111 killing process with pid 59509 00:09:27.111 15:36:10 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:09:27.111 15:36:10 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:09:27.111 15:36:10 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59509' 00:09:27.111 15:36:10 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59509 00:09:27.111 15:36:10 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59509 00:09:30.400 15:36:13 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:30.400 15:36:13 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:09:30.400 15:36:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59481 ]] 00:09:30.400 15:36:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59481 00:09:30.400 15:36:13 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59481 ']' 00:09:30.400 Process with pid 59481 is not found 00:09:30.400 15:36:13 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59481 00:09:30.400 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59481) - No such process 00:09:30.400 15:36:13 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59481 is not found' 00:09:30.400 15:36:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59509 ]] 00:09:30.400 15:36:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59509 00:09:30.400 15:36:13 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59509 ']' 00:09:30.400 Process with pid 59509 is not found 00:09:30.400 15:36:13 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59509 00:09:30.400 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59509) - No such process 00:09:30.400 15:36:13 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59509 is not found' 00:09:30.400 15:36:13 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:30.400 ************************************ 00:09:30.400 END TEST cpu_locks 00:09:30.400 ************************************ 00:09:30.400 00:09:30.400 real 0m58.424s 00:09:30.400 user 1m36.535s 00:09:30.400 sys 0m9.663s 00:09:30.400 15:36:13 event.cpu_locks -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:09:30.400 15:36:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:30.400 ************************************ 00:09:30.400 END TEST event 00:09:30.400 ************************************ 00:09:30.400 00:09:30.400 real 1m31.874s 00:09:30.400 user 2m42.320s 00:09:30.400 sys 0m14.970s 00:09:30.400 15:36:13 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.400 15:36:13 event -- common/autotest_common.sh@10 -- # set +x 00:09:30.400 15:36:13 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:30.400 15:36:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:30.400 15:36:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.400 15:36:13 -- common/autotest_common.sh@10 -- # set +x 00:09:30.400 ************************************ 00:09:30.400 START TEST thread 00:09:30.400 ************************************ 00:09:30.400 15:36:13 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:30.400 * Looking for test storage... 
00:09:30.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:09:30.400 15:36:13 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:30.400 15:36:13 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:09:30.400 15:36:13 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:30.400 15:36:13 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:30.400 15:36:13 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.400 15:36:13 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.400 15:36:13 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.400 15:36:13 thread -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.400 15:36:13 thread -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.400 15:36:13 thread -- scripts/common.sh@337 -- # IFS=.-: 00:09:30.400 15:36:13 thread -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.400 15:36:13 thread -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.400 15:36:13 thread -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.400 15:36:13 thread -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.400 15:36:13 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.400 15:36:13 thread -- scripts/common.sh@344 -- # case "$op" in 00:09:30.400 15:36:13 thread -- scripts/common.sh@345 -- # : 1 00:09:30.400 15:36:13 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.400 15:36:13 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:30.400 15:36:13 thread -- scripts/common.sh@365 -- # decimal 1 00:09:30.400 15:36:13 thread -- scripts/common.sh@353 -- # local d=1 00:09:30.400 15:36:13 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.400 15:36:13 thread -- scripts/common.sh@355 -- # echo 1 00:09:30.400 15:36:13 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.400 15:36:13 thread -- scripts/common.sh@366 -- # decimal 2 00:09:30.400 15:36:13 thread -- scripts/common.sh@353 -- # local d=2 00:09:30.400 15:36:13 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.400 15:36:13 thread -- scripts/common.sh@355 -- # echo 2 00:09:30.400 15:36:13 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.400 15:36:13 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.400 15:36:13 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.400 15:36:13 thread -- scripts/common.sh@368 -- # return 0 00:09:30.401 15:36:13 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.401 15:36:13 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:30.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.401 --rc genhtml_branch_coverage=1 00:09:30.401 --rc genhtml_function_coverage=1 00:09:30.401 --rc genhtml_legend=1 00:09:30.401 --rc geninfo_all_blocks=1 00:09:30.401 --rc geninfo_unexecuted_blocks=1 00:09:30.401 00:09:30.401 ' 00:09:30.401 15:36:13 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:30.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.401 --rc genhtml_branch_coverage=1 00:09:30.401 --rc genhtml_function_coverage=1 00:09:30.401 --rc genhtml_legend=1 00:09:30.401 --rc geninfo_all_blocks=1 00:09:30.401 --rc geninfo_unexecuted_blocks=1 00:09:30.401 00:09:30.401 ' 00:09:30.401 15:36:13 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:30.401 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.401 --rc genhtml_branch_coverage=1 00:09:30.401 --rc genhtml_function_coverage=1 00:09:30.401 --rc genhtml_legend=1 00:09:30.401 --rc geninfo_all_blocks=1 00:09:30.401 --rc geninfo_unexecuted_blocks=1 00:09:30.401 00:09:30.401 ' 00:09:30.401 15:36:13 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:30.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.401 --rc genhtml_branch_coverage=1 00:09:30.401 --rc genhtml_function_coverage=1 00:09:30.401 --rc genhtml_legend=1 00:09:30.401 --rc geninfo_all_blocks=1 00:09:30.401 --rc geninfo_unexecuted_blocks=1 00:09:30.401 00:09:30.401 ' 00:09:30.401 15:36:13 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:30.401 15:36:13 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:30.401 15:36:13 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.401 15:36:13 thread -- common/autotest_common.sh@10 -- # set +x 00:09:30.401 ************************************ 00:09:30.401 START TEST thread_poller_perf 00:09:30.401 ************************************ 00:09:30.401 15:36:13 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:30.401 [2024-12-06 15:36:13.547794] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:09:30.401 [2024-12-06 15:36:13.548312] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59717 ] 00:09:30.677 [2024-12-06 15:36:13.738791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.677 [2024-12-06 15:36:13.880465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.677 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:09:32.053 [2024-12-06T15:36:15.348Z] ====================================== 00:09:32.053 [2024-12-06T15:36:15.348Z] busy:2501738178 (cyc) 00:09:32.053 [2024-12-06T15:36:15.348Z] total_run_count: 386000 00:09:32.053 [2024-12-06T15:36:15.348Z] tsc_hz: 2490000000 (cyc) 00:09:32.053 [2024-12-06T15:36:15.348Z] ====================================== 00:09:32.053 [2024-12-06T15:36:15.348Z] poller_cost: 6481 (cyc), 2602 (nsec) 00:09:32.053 00:09:32.053 real 0m1.653s 00:09:32.053 user 0m1.410s 00:09:32.053 sys 0m0.132s 00:09:32.053 15:36:15 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.053 15:36:15 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:32.053 ************************************ 00:09:32.053 END TEST thread_poller_perf 00:09:32.053 ************************************ 00:09:32.053 15:36:15 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:32.053 15:36:15 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:32.053 15:36:15 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.053 15:36:15 thread -- common/autotest_common.sh@10 -- # set +x 00:09:32.053 ************************************ 00:09:32.053 START TEST thread_poller_perf 00:09:32.053 
************************************ 00:09:32.053 15:36:15 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:32.053 [2024-12-06 15:36:15.280069] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:09:32.053 [2024-12-06 15:36:15.280518] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59748 ] 00:09:32.311 [2024-12-06 15:36:15.466939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.568 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:09:32.568 [2024-12-06 15:36:15.616394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.943 [2024-12-06T15:36:17.238Z] ====================================== 00:09:33.943 [2024-12-06T15:36:17.238Z] busy:2494511860 (cyc) 00:09:33.943 [2024-12-06T15:36:17.238Z] total_run_count: 4677000 00:09:33.943 [2024-12-06T15:36:17.238Z] tsc_hz: 2490000000 (cyc) 00:09:33.943 [2024-12-06T15:36:17.238Z] ====================================== 00:09:33.943 [2024-12-06T15:36:17.238Z] poller_cost: 533 (cyc), 214 (nsec) 00:09:33.943 ************************************ 00:09:33.943 END TEST thread_poller_perf 00:09:33.943 ************************************ 00:09:33.943 00:09:33.943 real 0m1.652s 00:09:33.943 user 0m1.410s 00:09:33.943 sys 0m0.132s 00:09:33.943 15:36:16 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.943 15:36:16 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:33.944 15:36:16 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:09:33.944 ************************************ 00:09:33.944 END TEST thread 00:09:33.944 ************************************ 00:09:33.944 
00:09:33.944 real 0m3.685s 00:09:33.944 user 0m3.003s 00:09:33.944 sys 0m0.473s 00:09:33.944 15:36:16 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.944 15:36:16 thread -- common/autotest_common.sh@10 -- # set +x 00:09:33.944 15:36:16 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:09:33.944 15:36:16 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:33.944 15:36:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:33.944 15:36:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.944 15:36:17 -- common/autotest_common.sh@10 -- # set +x 00:09:33.944 ************************************ 00:09:33.944 START TEST app_cmdline 00:09:33.944 ************************************ 00:09:33.944 15:36:17 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:33.944 * Looking for test storage... 00:09:33.944 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:33.944 15:36:17 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:33.944 15:36:17 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:09:33.944 15:36:17 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:34.202 15:36:17 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:34.202 15:36:17 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.202 15:36:17 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.202 15:36:17 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.202 15:36:17 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.202 15:36:17 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.202 15:36:17 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.202 15:36:17 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.202 15:36:17 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:09:34.202 15:36:17 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.202 15:36:17 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.202 15:36:17 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.202 15:36:17 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:09:34.202 15:36:17 app_cmdline -- scripts/common.sh@345 -- # : 1 00:09:34.202 15:36:17 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.202 15:36:17 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:34.202 15:36:17 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:09:34.202 15:36:17 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:09:34.202 15:36:17 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.202 15:36:17 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:09:34.202 15:36:17 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.202 15:36:17 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:09:34.202 15:36:17 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:09:34.202 15:36:17 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.202 15:36:17 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:09:34.202 15:36:17 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.202 15:36:17 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.202 15:36:17 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.202 15:36:17 app_cmdline -- scripts/common.sh@368 -- # return 0 00:09:34.202 15:36:17 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.202 15:36:17 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:34.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.202 --rc genhtml_branch_coverage=1 00:09:34.202 --rc genhtml_function_coverage=1 00:09:34.202 --rc 
genhtml_legend=1 00:09:34.202 --rc geninfo_all_blocks=1 00:09:34.202 --rc geninfo_unexecuted_blocks=1 00:09:34.202 00:09:34.202 ' 00:09:34.202 15:36:17 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:34.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.202 --rc genhtml_branch_coverage=1 00:09:34.202 --rc genhtml_function_coverage=1 00:09:34.202 --rc genhtml_legend=1 00:09:34.202 --rc geninfo_all_blocks=1 00:09:34.202 --rc geninfo_unexecuted_blocks=1 00:09:34.202 00:09:34.202 ' 00:09:34.202 15:36:17 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:34.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.202 --rc genhtml_branch_coverage=1 00:09:34.202 --rc genhtml_function_coverage=1 00:09:34.202 --rc genhtml_legend=1 00:09:34.202 --rc geninfo_all_blocks=1 00:09:34.202 --rc geninfo_unexecuted_blocks=1 00:09:34.202 00:09:34.202 ' 00:09:34.202 15:36:17 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:34.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.202 --rc genhtml_branch_coverage=1 00:09:34.202 --rc genhtml_function_coverage=1 00:09:34.202 --rc genhtml_legend=1 00:09:34.202 --rc geninfo_all_blocks=1 00:09:34.202 --rc geninfo_unexecuted_blocks=1 00:09:34.202 00:09:34.202 ' 00:09:34.202 15:36:17 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:34.202 15:36:17 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59837 00:09:34.202 15:36:17 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:34.202 15:36:17 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59837 00:09:34.202 15:36:17 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59837 ']' 00:09:34.202 15:36:17 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.202 15:36:17 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:09:34.202 15:36:17 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.202 15:36:17 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:34.202 15:36:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:34.202 [2024-12-06 15:36:17.394790] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:09:34.202 [2024-12-06 15:36:17.395225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59837 ] 00:09:34.460 [2024-12-06 15:36:17.581873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.461 [2024-12-06 15:36:17.728010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.835 15:36:18 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:35.835 15:36:18 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:09:35.835 15:36:18 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:09:35.835 { 00:09:35.835 "version": "SPDK v25.01-pre git sha1 a718549f7", 00:09:35.835 "fields": { 00:09:35.835 "major": 25, 00:09:35.835 "minor": 1, 00:09:35.835 "patch": 0, 00:09:35.835 "suffix": "-pre", 00:09:35.835 "commit": "a718549f7" 00:09:35.835 } 00:09:35.835 } 00:09:35.835 15:36:19 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:35.835 15:36:19 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:35.835 15:36:19 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:35.835 15:36:19 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:35.835 15:36:19 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:35.835 15:36:19 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.835 15:36:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:35.835 15:36:19 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:35.835 15:36:19 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:35.835 15:36:19 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.835 15:36:19 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:35.835 15:36:19 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:35.835 15:36:19 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:35.835 15:36:19 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:09:35.835 15:36:19 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:35.835 15:36:19 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:35.835 15:36:19 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.835 15:36:19 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:35.835 15:36:19 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.835 15:36:19 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:35.836 15:36:19 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.836 15:36:19 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:35.836 15:36:19 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:35.836 15:36:19 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:36.095 request: 00:09:36.095 { 00:09:36.095 "method": "env_dpdk_get_mem_stats", 00:09:36.095 "req_id": 1 00:09:36.095 } 00:09:36.095 Got JSON-RPC error response 00:09:36.095 response: 00:09:36.095 { 00:09:36.095 "code": -32601, 00:09:36.095 "message": "Method not found" 00:09:36.095 } 00:09:36.095 15:36:19 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:09:36.095 15:36:19 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:36.095 15:36:19 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:36.095 15:36:19 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:36.095 15:36:19 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59837 00:09:36.095 15:36:19 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59837 ']' 00:09:36.095 15:36:19 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59837 00:09:36.095 15:36:19 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:09:36.095 15:36:19 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:36.095 15:36:19 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59837 00:09:36.095 killing process with pid 59837 00:09:36.095 15:36:19 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:36.095 15:36:19 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:36.095 15:36:19 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59837' 00:09:36.095 15:36:19 app_cmdline -- common/autotest_common.sh@973 -- # kill 59837 00:09:36.095 15:36:19 app_cmdline -- common/autotest_common.sh@978 -- # wait 59837 00:09:39.384 00:09:39.384 real 0m5.083s 00:09:39.384 user 0m5.140s 00:09:39.384 sys 0m0.892s 00:09:39.384 
************************************ 00:09:39.384 END TEST app_cmdline 00:09:39.384 15:36:22 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.384 15:36:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:39.384 ************************************ 00:09:39.384 15:36:22 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:39.384 15:36:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:39.384 15:36:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.384 15:36:22 -- common/autotest_common.sh@10 -- # set +x 00:09:39.384 ************************************ 00:09:39.384 START TEST version 00:09:39.384 ************************************ 00:09:39.384 15:36:22 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:39.384 * Looking for test storage... 00:09:39.384 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:39.384 15:36:22 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:39.384 15:36:22 version -- common/autotest_common.sh@1711 -- # lcov --version 00:09:39.384 15:36:22 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:39.384 15:36:22 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:39.384 15:36:22 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:39.384 15:36:22 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:39.384 15:36:22 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:39.384 15:36:22 version -- scripts/common.sh@336 -- # IFS=.-: 00:09:39.384 15:36:22 version -- scripts/common.sh@336 -- # read -ra ver1 00:09:39.384 15:36:22 version -- scripts/common.sh@337 -- # IFS=.-: 00:09:39.384 15:36:22 version -- scripts/common.sh@337 -- # read -ra ver2 00:09:39.384 15:36:22 version -- scripts/common.sh@338 -- # local 'op=<' 00:09:39.384 15:36:22 version -- scripts/common.sh@340 -- # ver1_l=2 
00:09:39.384 15:36:22 version -- scripts/common.sh@341 -- # ver2_l=1 00:09:39.384 15:36:22 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:39.384 15:36:22 version -- scripts/common.sh@344 -- # case "$op" in 00:09:39.384 15:36:22 version -- scripts/common.sh@345 -- # : 1 00:09:39.384 15:36:22 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:39.384 15:36:22 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:39.384 15:36:22 version -- scripts/common.sh@365 -- # decimal 1 00:09:39.384 15:36:22 version -- scripts/common.sh@353 -- # local d=1 00:09:39.384 15:36:22 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:39.384 15:36:22 version -- scripts/common.sh@355 -- # echo 1 00:09:39.384 15:36:22 version -- scripts/common.sh@365 -- # ver1[v]=1 00:09:39.384 15:36:22 version -- scripts/common.sh@366 -- # decimal 2 00:09:39.384 15:36:22 version -- scripts/common.sh@353 -- # local d=2 00:09:39.384 15:36:22 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:39.384 15:36:22 version -- scripts/common.sh@355 -- # echo 2 00:09:39.384 15:36:22 version -- scripts/common.sh@366 -- # ver2[v]=2 00:09:39.384 15:36:22 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:39.384 15:36:22 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:39.384 15:36:22 version -- scripts/common.sh@368 -- # return 0 00:09:39.384 15:36:22 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:39.384 15:36:22 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:39.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.384 --rc genhtml_branch_coverage=1 00:09:39.384 --rc genhtml_function_coverage=1 00:09:39.384 --rc genhtml_legend=1 00:09:39.384 --rc geninfo_all_blocks=1 00:09:39.384 --rc geninfo_unexecuted_blocks=1 00:09:39.384 00:09:39.384 ' 00:09:39.384 15:36:22 version -- 
common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:39.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.384 --rc genhtml_branch_coverage=1 00:09:39.384 --rc genhtml_function_coverage=1 00:09:39.384 --rc genhtml_legend=1 00:09:39.384 --rc geninfo_all_blocks=1 00:09:39.384 --rc geninfo_unexecuted_blocks=1 00:09:39.384 00:09:39.384 ' 00:09:39.384 15:36:22 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:39.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.384 --rc genhtml_branch_coverage=1 00:09:39.384 --rc genhtml_function_coverage=1 00:09:39.384 --rc genhtml_legend=1 00:09:39.384 --rc geninfo_all_blocks=1 00:09:39.384 --rc geninfo_unexecuted_blocks=1 00:09:39.384 00:09:39.384 ' 00:09:39.384 15:36:22 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:39.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.384 --rc genhtml_branch_coverage=1 00:09:39.384 --rc genhtml_function_coverage=1 00:09:39.384 --rc genhtml_legend=1 00:09:39.384 --rc geninfo_all_blocks=1 00:09:39.384 --rc geninfo_unexecuted_blocks=1 00:09:39.384 00:09:39.384 ' 00:09:39.384 15:36:22 version -- app/version.sh@17 -- # get_header_version major 00:09:39.384 15:36:22 version -- app/version.sh@14 -- # tr -d '"' 00:09:39.384 15:36:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:39.384 15:36:22 version -- app/version.sh@14 -- # cut -f2 00:09:39.384 15:36:22 version -- app/version.sh@17 -- # major=25 00:09:39.384 15:36:22 version -- app/version.sh@18 -- # get_header_version minor 00:09:39.384 15:36:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:39.384 15:36:22 version -- app/version.sh@14 -- # tr -d '"' 00:09:39.384 15:36:22 version -- app/version.sh@14 -- # cut -f2 00:09:39.384 15:36:22 version -- app/version.sh@18 -- 
# minor=1 00:09:39.384 15:36:22 version -- app/version.sh@19 -- # get_header_version patch 00:09:39.384 15:36:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:39.384 15:36:22 version -- app/version.sh@14 -- # cut -f2 00:09:39.384 15:36:22 version -- app/version.sh@14 -- # tr -d '"' 00:09:39.384 15:36:22 version -- app/version.sh@19 -- # patch=0 00:09:39.384 15:36:22 version -- app/version.sh@20 -- # get_header_version suffix 00:09:39.384 15:36:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:39.384 15:36:22 version -- app/version.sh@14 -- # tr -d '"' 00:09:39.384 15:36:22 version -- app/version.sh@14 -- # cut -f2 00:09:39.384 15:36:22 version -- app/version.sh@20 -- # suffix=-pre 00:09:39.384 15:36:22 version -- app/version.sh@22 -- # version=25.1 00:09:39.384 15:36:22 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:39.384 15:36:22 version -- app/version.sh@28 -- # version=25.1rc0 00:09:39.384 15:36:22 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:39.384 15:36:22 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:39.384 15:36:22 version -- app/version.sh@30 -- # py_version=25.1rc0 00:09:39.385 15:36:22 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:09:39.385 ************************************ 00:09:39.385 END TEST version 00:09:39.385 ************************************ 00:09:39.385 00:09:39.385 real 0m0.343s 00:09:39.385 user 0m0.199s 00:09:39.385 sys 0m0.206s 00:09:39.385 15:36:22 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.385 15:36:22 version -- 
common/autotest_common.sh@10 -- # set +x 00:09:39.385 15:36:22 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:09:39.385 15:36:22 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:09:39.385 15:36:22 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:09:39.385 15:36:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:39.385 15:36:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.385 15:36:22 -- common/autotest_common.sh@10 -- # set +x 00:09:39.385 ************************************ 00:09:39.385 START TEST bdev_raid 00:09:39.385 ************************************ 00:09:39.385 15:36:22 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:09:39.644 * Looking for test storage... 00:09:39.644 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:39.644 15:36:22 bdev_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:39.644 15:36:22 bdev_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:09:39.644 15:36:22 bdev_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:39.644 15:36:22 bdev_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:39.644 15:36:22 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:39.644 15:36:22 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:39.644 15:36:22 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:39.644 15:36:22 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:09:39.644 15:36:22 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:09:39.644 15:36:22 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:09:39.644 15:36:22 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:09:39.644 15:36:22 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:09:39.644 15:36:22 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:09:39.644 15:36:22 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:09:39.644 
15:36:22 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:39.644 15:36:22 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:09:39.644 15:36:22 bdev_raid -- scripts/common.sh@345 -- # : 1 00:09:39.644 15:36:22 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:39.644 15:36:22 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:39.644 15:36:22 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:09:39.644 15:36:22 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:09:39.644 15:36:22 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:39.644 15:36:22 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:09:39.644 15:36:22 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:09:39.644 15:36:22 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:09:39.644 15:36:22 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:09:39.644 15:36:22 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:39.644 15:36:22 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:09:39.644 15:36:22 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:09:39.644 15:36:22 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:39.644 15:36:22 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:39.644 15:36:22 bdev_raid -- scripts/common.sh@368 -- # return 0 00:09:39.644 15:36:22 bdev_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:39.644 15:36:22 bdev_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:39.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.644 --rc genhtml_branch_coverage=1 00:09:39.644 --rc genhtml_function_coverage=1 00:09:39.644 --rc genhtml_legend=1 00:09:39.644 --rc geninfo_all_blocks=1 00:09:39.644 --rc geninfo_unexecuted_blocks=1 00:09:39.644 00:09:39.644 ' 00:09:39.644 15:36:22 bdev_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 
00:09:39.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.644 --rc genhtml_branch_coverage=1 00:09:39.644 --rc genhtml_function_coverage=1 00:09:39.644 --rc genhtml_legend=1 00:09:39.644 --rc geninfo_all_blocks=1 00:09:39.644 --rc geninfo_unexecuted_blocks=1 00:09:39.644 00:09:39.644 ' 00:09:39.644 15:36:22 bdev_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:39.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.644 --rc genhtml_branch_coverage=1 00:09:39.644 --rc genhtml_function_coverage=1 00:09:39.644 --rc genhtml_legend=1 00:09:39.644 --rc geninfo_all_blocks=1 00:09:39.644 --rc geninfo_unexecuted_blocks=1 00:09:39.644 00:09:39.644 ' 00:09:39.644 15:36:22 bdev_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:39.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.644 --rc genhtml_branch_coverage=1 00:09:39.644 --rc genhtml_function_coverage=1 00:09:39.644 --rc genhtml_legend=1 00:09:39.644 --rc geninfo_all_blocks=1 00:09:39.644 --rc geninfo_unexecuted_blocks=1 00:09:39.644 00:09:39.644 ' 00:09:39.644 15:36:22 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:39.644 15:36:22 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:09:39.644 15:36:22 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:09:39.644 15:36:22 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:09:39.644 15:36:22 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:09:39.644 15:36:22 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:09:39.644 15:36:22 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:09:39.644 15:36:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:39.644 15:36:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.644 15:36:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:09:39.644 ************************************ 00:09:39.644 START TEST raid1_resize_data_offset_test 00:09:39.644 ************************************ 00:09:39.644 15:36:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:09:39.644 15:36:22 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60036 00:09:39.644 15:36:22 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:39.644 15:36:22 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60036' 00:09:39.644 Process raid pid: 60036 00:09:39.644 15:36:22 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60036 00:09:39.645 15:36:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 60036 ']' 00:09:39.645 15:36:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.645 15:36:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:39.645 15:36:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.645 15:36:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:39.645 15:36:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.903 [2024-12-06 15:36:22.981020] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:09:39.903 [2024-12-06 15:36:22.981191] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:39.903 [2024-12-06 15:36:23.171957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.162 [2024-12-06 15:36:23.318636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.421 [2024-12-06 15:36:23.568653] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:40.421 [2024-12-06 15:36:23.568722] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:40.682 15:36:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:40.682 15:36:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:09:40.682 15:36:23 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:09:40.682 15:36:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.682 15:36:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.682 malloc0 00:09:40.682 15:36:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.682 15:36:23 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:09:40.682 15:36:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.682 15:36:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.941 malloc1 00:09:40.941 15:36:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.941 15:36:24 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:09:40.941 15:36:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.941 15:36:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.941 null0 00:09:40.941 15:36:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.941 15:36:24 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:09:40.941 15:36:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.941 15:36:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.941 [2024-12-06 15:36:24.073368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:09:40.941 [2024-12-06 15:36:24.076110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:40.941 [2024-12-06 15:36:24.076188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:09:40.941 [2024-12-06 15:36:24.076434] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:40.941 [2024-12-06 15:36:24.076452] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:09:40.941 [2024-12-06 15:36:24.076881] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:40.941 [2024-12-06 15:36:24.077096] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:40.941 [2024-12-06 15:36:24.077112] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:09:40.941 [2024-12-06 15:36:24.077396] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:09:40.941 15:36:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.941 15:36:24 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.941 15:36:24 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:09:40.941 15:36:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.941 15:36:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.941 15:36:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.941 15:36:24 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:09:40.942 15:36:24 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:09:40.942 15:36:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.942 15:36:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.942 [2024-12-06 15:36:24.129381] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:09:40.942 15:36:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.942 15:36:24 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:09:40.942 15:36:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.942 15:36:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.876 malloc2 00:09:41.876 15:36:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.876 15:36:24 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:09:41.876 15:36:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.876 15:36:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.876 [2024-12-06 15:36:24.807018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:41.876 [2024-12-06 15:36:24.828269] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:41.876 15:36:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.876 [2024-12-06 15:36:24.831198] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:09:41.876 15:36:24 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.876 15:36:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.876 15:36:24 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:09:41.876 15:36:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.876 15:36:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.876 15:36:24 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:09:41.876 15:36:24 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60036 00:09:41.876 15:36:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 60036 ']' 00:09:41.876 15:36:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 60036 00:09:41.876 15:36:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:09:41.876 15:36:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:09:41.877 15:36:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60036 00:09:41.877 killing process with pid 60036 00:09:41.877 15:36:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:41.877 15:36:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:41.877 15:36:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60036' 00:09:41.877 15:36:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 60036 00:09:41.877 15:36:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 60036 00:09:41.877 [2024-12-06 15:36:24.915275] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:41.877 [2024-12-06 15:36:24.916109] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:09:41.877 [2024-12-06 15:36:24.916189] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:41.877 [2024-12-06 15:36:24.916212] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:09:41.877 [2024-12-06 15:36:24.956610] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:41.877 [2024-12-06 15:36:24.957034] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:41.877 [2024-12-06 15:36:24.957058] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:09:43.855 [2024-12-06 15:36:26.958291] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:45.229 15:36:28 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:09:45.229 00:09:45.229 real 0m5.381s 00:09:45.229 user 0m5.061s 00:09:45.229 sys 0m0.843s 00:09:45.229 15:36:28 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.229 ************************************ 00:09:45.229 END TEST raid1_resize_data_offset_test 00:09:45.229 ************************************ 00:09:45.229 15:36:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.229 15:36:28 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:09:45.229 15:36:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:45.229 15:36:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.229 15:36:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:45.229 ************************************ 00:09:45.229 START TEST raid0_resize_superblock_test 00:09:45.229 ************************************ 00:09:45.229 15:36:28 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:09:45.229 15:36:28 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:09:45.229 15:36:28 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:45.229 15:36:28 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60129 00:09:45.229 Process raid pid: 60129 00:09:45.229 15:36:28 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60129' 00:09:45.229 15:36:28 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60129 00:09:45.229 15:36:28 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60129 ']' 00:09:45.229 15:36:28 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.229 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:09:45.229 15:36:28 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:45.229 15:36:28 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.229 15:36:28 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:45.229 15:36:28 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.229 [2024-12-06 15:36:28.421827] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:09:45.229 [2024-12-06 15:36:28.422964] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:45.488 [2024-12-06 15:36:28.613787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.488 [2024-12-06 15:36:28.766886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.746 [2024-12-06 15:36:29.018275] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:45.746 [2024-12-06 15:36:29.018669] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.312 15:36:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:46.312 15:36:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:46.312 15:36:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:09:46.312 15:36:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.312 15:36:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.879 
malloc0 00:09:46.879 15:36:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.879 15:36:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:09:46.879 15:36:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.879 15:36:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.879 [2024-12-06 15:36:29.983461] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:09:46.879 [2024-12-06 15:36:29.983582] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.879 [2024-12-06 15:36:29.983616] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:46.879 [2024-12-06 15:36:29.983633] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.879 [2024-12-06 15:36:29.987076] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.879 [2024-12-06 15:36:29.987309] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:09:46.879 pt0 00:09:46.879 15:36:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.879 15:36:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:09:46.879 15:36:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.879 15:36:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.879 94929d3c-f61a-4c7c-a10a-e81c0192f1d3 00:09:46.879 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.879 15:36:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:09:46.879 15:36:30 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.879 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.138 83dfa8f0-a151-43c5-9045-b3dfb205fe22 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.138 56193022-ce3b-4a73-90dc-9e00c7db852f 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.138 [2024-12-06 15:36:30.190717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 83dfa8f0-a151-43c5-9045-b3dfb205fe22 is claimed 00:09:47.138 [2024-12-06 15:36:30.190913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 56193022-ce3b-4a73-90dc-9e00c7db852f is claimed 00:09:47.138 [2024-12-06 15:36:30.191085] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:47.138 [2024-12-06 15:36:30.191107] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:09:47.138 [2024-12-06 15:36:30.191481] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:47.138 [2024-12-06 15:36:30.191796] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:47.138 [2024-12-06 15:36:30.191811] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:09:47.138 [2024-12-06 15:36:30.192044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.138 [2024-12-06 15:36:30.290805] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.138 [2024-12-06 15:36:30.334815] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:47.138 [2024-12-06 15:36:30.334873] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '83dfa8f0-a151-43c5-9045-b3dfb205fe22' was resized: old size 131072, new size 204800 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.138 15:36:30 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.138 [2024-12-06 15:36:30.346720] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:47.138 [2024-12-06 15:36:30.346776] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '56193022-ce3b-4a73-90dc-9e00c7db852f' was resized: old size 131072, new size 204800 00:09:47.138 [2024-12-06 15:36:30.346825] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.138 15:36:30 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:09:47.138 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.398 15:36:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:09:47.398 15:36:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:47.398 15:36:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:47.398 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.398 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.398 15:36:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:47.398 15:36:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:09:47.398 [2024-12-06 15:36:30.450546] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:47.398 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.398 15:36:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:47.398 15:36:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:47.398 15:36:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:09:47.398 15:36:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:09:47.398 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.398 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.398 [2024-12-06 15:36:30.478289] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 
being removed: closing lvstore lvs0 00:09:47.398 [2024-12-06 15:36:30.478406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:09:47.398 [2024-12-06 15:36:30.478429] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:47.398 [2024-12-06 15:36:30.478449] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:09:47.398 [2024-12-06 15:36:30.478621] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:47.398 [2024-12-06 15:36:30.478664] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:47.398 [2024-12-06 15:36:30.478688] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:09:47.398 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.398 15:36:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:09:47.398 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.398 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.398 [2024-12-06 15:36:30.490167] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:09:47.398 [2024-12-06 15:36:30.490500] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.398 [2024-12-06 15:36:30.490555] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:09:47.398 [2024-12-06 15:36:30.490574] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.398 [2024-12-06 15:36:30.494015] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.398 [2024-12-06 15:36:30.494087] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:09:47.398 pt0 00:09:47.398 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.398 15:36:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:09:47.398 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.398 [2024-12-06 15:36:30.496432] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 83dfa8f0-a151-43c5-9045-b3dfb205fe22 00:09:47.398 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.398 [2024-12-06 15:36:30.496538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 83dfa8f0-a151-43c5-9045-b3dfb205fe22 is claimed 00:09:47.398 [2024-12-06 15:36:30.496676] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 56193022-ce3b-4a73-90dc-9e00c7db852f 00:09:47.398 [2024-12-06 15:36:30.496702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 56193022-ce3b-4a73-90dc-9e00c7db852f is claimed 00:09:47.398 [2024-12-06 15:36:30.496864] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 56193022-ce3b-4a73-90dc-9e00c7db852f (2) smaller than existing raid bdev Raid (3) 00:09:47.398 [2024-12-06 15:36:30.496898] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 83dfa8f0-a151-43c5-9045-b3dfb205fe22: File exists 00:09:47.398 [2024-12-06 15:36:30.496955] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:47.398 [2024-12-06 15:36:30.496990] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:09:47.398 [2024-12-06 15:36:30.497336] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:47.398 [2024-12-06 15:36:30.497556] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:47.398 [2024-12-06 
15:36:30.497575] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:09:47.398 [2024-12-06 15:36:30.497791] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.398 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.398 15:36:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:47.398 15:36:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:47.398 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.398 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.398 15:36:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:47.399 15:36:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:09:47.399 [2024-12-06 15:36:30.514601] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:47.399 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.399 15:36:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:47.399 15:36:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:47.399 15:36:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:09:47.399 15:36:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60129 00:09:47.399 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60129 ']' 00:09:47.399 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60129 00:09:47.399 15:36:30 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:09:47.399 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:47.399 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60129 00:09:47.399 killing process with pid 60129 00:09:47.399 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:47.399 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:47.399 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60129' 00:09:47.399 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60129 00:09:47.399 [2024-12-06 15:36:30.584960] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:47.399 15:36:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60129 00:09:47.399 [2024-12-06 15:36:30.585089] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:47.399 [2024-12-06 15:36:30.585151] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:47.399 [2024-12-06 15:36:30.585163] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:09:49.299 [2024-12-06 15:36:32.198282] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:50.237 15:36:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:09:50.237 00:09:50.237 real 0m5.169s 00:09:50.237 user 0m5.172s 00:09:50.237 sys 0m0.824s 00:09:50.237 15:36:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.237 ************************************ 00:09:50.237 END TEST raid0_resize_superblock_test 00:09:50.237 
************************************ 00:09:50.237 15:36:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.496 15:36:33 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:09:50.496 15:36:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:50.496 15:36:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.496 15:36:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:50.496 ************************************ 00:09:50.496 START TEST raid1_resize_superblock_test 00:09:50.496 ************************************ 00:09:50.496 15:36:33 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:09:50.496 15:36:33 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:09:50.496 15:36:33 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60229 00:09:50.496 15:36:33 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:50.496 Process raid pid: 60229 00:09:50.496 15:36:33 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60229' 00:09:50.497 15:36:33 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60229 00:09:50.497 15:36:33 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60229 ']' 00:09:50.497 15:36:33 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:50.497 15:36:33 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:50.497 15:36:33 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.497 15:36:33 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:50.497 15:36:33 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.497 [2024-12-06 15:36:33.667314] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:09:50.497 [2024-12-06 15:36:33.667480] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:50.756 [2024-12-06 15:36:33.859023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.756 [2024-12-06 15:36:34.012996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.015 [2024-12-06 15:36:34.259958] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:51.015 [2024-12-06 15:36:34.260032] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:51.274 15:36:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.274 15:36:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:51.274 15:36:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:09:51.274 15:36:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.274 15:36:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.210 malloc0 00:09:52.210 
15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.210 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:09:52.210 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.210 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.210 [2024-12-06 15:36:35.209164] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:09:52.210 [2024-12-06 15:36:35.209269] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.210 [2024-12-06 15:36:35.209304] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:52.210 [2024-12-06 15:36:35.209322] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.210 [2024-12-06 15:36:35.212731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.210 [2024-12-06 15:36:35.212965] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:09:52.210 pt0 00:09:52.210 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.210 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:09:52.210 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.210 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.210 9e51b5c6-9df3-4774-a7c1-fb1c1700904c 00:09:52.210 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.210 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:09:52.210 15:36:35 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.210 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.210 b56dc44c-0217-4b1f-80f4-c0624f85a1bd 00:09:52.210 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.210 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:09:52.210 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.210 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.210 e05b2c0e-2e72-4aa7-95f7-fa6e7764580b 00:09:52.210 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.210 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:09:52.210 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:09:52.210 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.210 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.210 [2024-12-06 15:36:35.416395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b56dc44c-0217-4b1f-80f4-c0624f85a1bd is claimed 00:09:52.210 [2024-12-06 15:36:35.416907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev e05b2c0e-2e72-4aa7-95f7-fa6e7764580b is claimed 00:09:52.210 [2024-12-06 15:36:35.417119] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:52.210 [2024-12-06 15:36:35.417142] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:09:52.210 [2024-12-06 15:36:35.417548] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:52.210 [2024-12-06 15:36:35.417816] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:52.210 [2024-12-06 15:36:35.417830] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:09:52.210 [2024-12-06 15:36:35.418062] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.210 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.210 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:09:52.210 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:09:52.210 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.210 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.210 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.210 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:09:52.210 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:09:52.210 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:09:52.210 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.210 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.210 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.468 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:09:52.468 15:36:35 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:52.468 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:09:52.468 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:52.468 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.469 [2024-12-06 15:36:35.524469] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.469 [2024-12-06 15:36:35.564446] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:52.469 [2024-12-06 15:36:35.564690] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'b56dc44c-0217-4b1f-80f4-c0624f85a1bd' was resized: old size 131072, new size 204800 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.469 15:36:35 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.469 [2024-12-06 15:36:35.576396] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:52.469 [2024-12-06 15:36:35.576662] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'e05b2c0e-2e72-4aa7-95f7-fa6e7764580b' was resized: old size 131072, new size 204800 00:09:52.469 [2024-12-06 15:36:35.576725] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:09:52.469 15:36:35 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:09:52.469 [2024-12-06 15:36:35.672239] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.469 [2024-12-06 15:36:35.711991] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:09:52.469 [2024-12-06 15:36:35.712109] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:09:52.469 [2024-12-06 15:36:35.712149] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:09:52.469 [2024-12-06 15:36:35.712351] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:52.469 [2024-12-06 15:36:35.712622] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:52.469 [2024-12-06 15:36:35.712705] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:52.469 [2024-12-06 15:36:35.712723] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.469 [2024-12-06 15:36:35.723849] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:09:52.469 [2024-12-06 15:36:35.723962] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.469 [2024-12-06 15:36:35.723993] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:09:52.469 [2024-12-06 15:36:35.724009] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.469 [2024-12-06 15:36:35.727199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.469 [2024-12-06 15:36:35.727269] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:09:52.469 pt0 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.469 [2024-12-06 15:36:35.729367] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev b56dc44c-0217-4b1f-80f4-c0624f85a1bd 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:09:52.469 [2024-12-06 15:36:35.729456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b56dc44c-0217-4b1f-80f4-c0624f85a1bd is claimed 00:09:52.469 [2024-12-06 15:36:35.729587] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev e05b2c0e-2e72-4aa7-95f7-fa6e7764580b 00:09:52.469 [2024-12-06 15:36:35.729612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev e05b2c0e-2e72-4aa7-95f7-fa6e7764580b is claimed 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.469 [2024-12-06 15:36:35.729753] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev e05b2c0e-2e72-4aa7-95f7-fa6e7764580b (2) smaller than existing raid bdev Raid (3) 00:09:52.469 [2024-12-06 15:36:35.729782] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev b56dc44c-0217-4b1f-80f4-c0624f85a1bd: File exists 00:09:52.469 [2024-12-06 15:36:35.729830] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:52.469 [2024-12-06 15:36:35.729847] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.469 [2024-12-06 15:36:35.730158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:52.469 [2024-12-06 15:36:35.730327] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:52.469 [2024-12-06 
15:36:35.730337] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:09:52.469 [2024-12-06 15:36:35.730534] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.469 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.469 [2024-12-06 15:36:35.752095] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:52.727 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.727 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:52.727 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:52.727 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:09:52.727 15:36:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60229 00:09:52.727 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60229 ']' 00:09:52.727 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60229 00:09:52.727 15:36:35 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:09:52.727 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:52.727 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60229 00:09:52.727 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:52.727 killing process with pid 60229 00:09:52.727 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:52.727 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60229' 00:09:52.727 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60229 00:09:52.727 [2024-12-06 15:36:35.842879] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:52.727 15:36:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60229 00:09:52.727 [2024-12-06 15:36:35.843014] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:52.727 [2024-12-06 15:36:35.843089] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:52.727 [2024-12-06 15:36:35.843102] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:09:54.628 [2024-12-06 15:36:37.427612] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:55.564 ************************************ 00:09:55.564 END TEST raid1_resize_superblock_test 00:09:55.564 ************************************ 00:09:55.564 15:36:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:09:55.564 00:09:55.564 real 0m5.144s 00:09:55.564 user 0m5.186s 00:09:55.564 sys 0m0.826s 00:09:55.564 15:36:38 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.564 15:36:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.564 15:36:38 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:09:55.564 15:36:38 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:09:55.564 15:36:38 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:09:55.564 15:36:38 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:09:55.564 15:36:38 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:09:55.564 15:36:38 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:09:55.564 15:36:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:55.564 15:36:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.564 15:36:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:55.564 ************************************ 00:09:55.564 START TEST raid_function_test_raid0 00:09:55.564 ************************************ 00:09:55.564 15:36:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:09:55.564 15:36:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:09:55.564 15:36:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:09:55.564 15:36:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:09:55.564 15:36:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60337 00:09:55.564 15:36:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:55.564 15:36:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60337' 00:09:55.564 Process raid pid: 60337 00:09:55.564 15:36:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # 
waitforlisten 60337 00:09:55.564 15:36:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60337 ']' 00:09:55.564 15:36:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.564 15:36:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:55.564 15:36:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.564 15:36:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:55.564 15:36:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:55.823 [2024-12-06 15:36:38.912235] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:09:55.823 [2024-12-06 15:36:38.913782] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:55.823 [2024-12-06 15:36:39.102016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.082 [2024-12-06 15:36:39.248575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.341 [2024-12-06 15:36:39.503677] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:56.341 [2024-12-06 15:36:39.504005] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:56.599 15:36:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:56.599 15:36:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:09:56.599 15:36:39 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:09:56.599 15:36:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.599 15:36:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:56.599 Base_1 00:09:56.599 15:36:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.599 15:36:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:09:56.599 15:36:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.599 15:36:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:56.599 Base_2 00:09:56.599 15:36:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.599 15:36:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:09:56.599 15:36:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.599 15:36:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:56.599 [2024-12-06 15:36:39.877556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:56.599 [2024-12-06 15:36:39.880119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:56.599 [2024-12-06 15:36:39.880202] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:56.599 [2024-12-06 15:36:39.880218] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:56.599 [2024-12-06 15:36:39.880558] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:56.599 [2024-12-06 15:36:39.880727] bdev_raid.c:1764:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x617000007780 00:09:56.599 [2024-12-06 15:36:39.880738] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:09:56.599 [2024-12-06 15:36:39.880921] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:56.599 15:36:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.599 15:36:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:56.599 15:36:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:09:56.599 15:36:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.599 15:36:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:56.858 15:36:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.858 15:36:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:09:56.858 15:36:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:09:56.858 15:36:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:09:56.858 15:36:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:09:56.858 15:36:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:09:56.858 15:36:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:56.858 15:36:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:09:56.858 15:36:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:56.858 15:36:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:09:56.858 15:36:39 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:56.858 15:36:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:56.858 15:36:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:09:56.858 [2024-12-06 15:36:40.117301] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:56.858 /dev/nbd0 00:09:57.117 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:57.117 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:57.117 15:36:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:57.117 15:36:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:09:57.117 15:36:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:57.117 15:36:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:57.117 15:36:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:57.117 15:36:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:09:57.117 15:36:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:57.117 15:36:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:57.117 15:36:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:57.117 1+0 records in 00:09:57.117 1+0 records out 00:09:57.117 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236209 s, 17.3 MB/s 00:09:57.117 15:36:40 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:57.117 15:36:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:09:57.117 15:36:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:57.117 15:36:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:57.117 15:36:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:09:57.117 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:57.117 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:57.117 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:09:57.117 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:09:57.117 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:09:57.432 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:57.432 { 00:09:57.432 "nbd_device": "/dev/nbd0", 00:09:57.432 "bdev_name": "raid" 00:09:57.432 } 00:09:57.432 ]' 00:09:57.432 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:57.432 { 00:09:57.432 "nbd_device": "/dev/nbd0", 00:09:57.432 "bdev_name": "raid" 00:09:57.432 } 00:09:57.432 ]' 00:09:57.432 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:57.432 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:09:57.432 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:09:57.432 15:36:40 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:57.432 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:09:57.432 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:09:57.433 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:09:57.433 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:09:57.433 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:09:57.433 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:09:57.433 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:09:57.433 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:09:57.433 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:09:57.433 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:09:57.433 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:09:57.433 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:09:57.433 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:09:57.433 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:09:57.433 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:09:57.433 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:09:57.433 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:09:57.433 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:09:57.433 15:36:40 bdev_raid.raid_function_test_raid0 
-- bdev/bdev_raid.sh@25 -- # local unmap_off 00:09:57.433 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:09:57.433 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:09:57.433 4096+0 records in 00:09:57.433 4096+0 records out 00:09:57.433 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0390196 s, 53.7 MB/s 00:09:57.433 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:09:57.691 4096+0 records in 00:09:57.691 4096+0 records out 00:09:57.691 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.203484 s, 10.3 MB/s 00:09:57.691 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:09:57.691 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:57.691 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:09:57.691 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:57.691 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:09:57.691 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:09:57.691 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:09:57.691 128+0 records in 00:09:57.691 128+0 records out 00:09:57.691 65536 bytes (66 kB, 64 KiB) copied, 0.00178238 s, 36.8 MB/s 00:09:57.691 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:09:57.691 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:57.691 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 
-- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:57.691 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:57.691 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:57.691 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:09:57.691 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:09:57.691 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:09:57.691 2035+0 records in 00:09:57.691 2035+0 records out 00:09:57.691 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0212549 s, 49.0 MB/s 00:09:57.691 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:09:57.691 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:57.691 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:57.691 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:57.691 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:57.691 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:09:57.691 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:09:57.691 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:09:57.691 456+0 records in 00:09:57.691 456+0 records out 00:09:57.691 233472 bytes (233 kB, 228 KiB) copied, 0.00624709 s, 37.4 MB/s 00:09:57.691 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:09:57.691 
15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:57.691 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:57.691 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:57.691 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:57.691 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:09:57.691 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:09:57.691 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:09:57.691 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:57.691 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:57.691 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:09:57.691 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:57.691 15:36:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:09:57.950 15:36:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:57.950 [2024-12-06 15:36:41.128227] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:57.950 15:36:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:57.950 15:36:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:57.950 15:36:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:57.950 15:36:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # 
(( i <= 20 )) 00:09:57.950 15:36:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:57.950 15:36:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:09:57.950 15:36:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:09:57.950 15:36:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:09:57.950 15:36:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:09:57.950 15:36:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:09:58.209 15:36:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:58.209 15:36:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:58.209 15:36:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:58.209 15:36:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:58.209 15:36:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:58.209 15:36:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:58.209 15:36:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:09:58.209 15:36:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:09:58.209 15:36:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:58.209 15:36:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:09:58.209 15:36:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:09:58.209 15:36:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60337 00:09:58.209 15:36:41 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@954 -- # '[' -z 60337 ']' 00:09:58.209 15:36:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60337 00:09:58.209 15:36:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:09:58.209 15:36:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:58.209 15:36:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60337 00:09:58.209 killing process with pid 60337 00:09:58.209 15:36:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:58.209 15:36:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:58.209 15:36:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60337' 00:09:58.209 15:36:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60337 00:09:58.209 [2024-12-06 15:36:41.483004] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:58.209 15:36:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60337 00:09:58.209 [2024-12-06 15:36:41.483141] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:58.209 [2024-12-06 15:36:41.483203] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:58.209 [2024-12-06 15:36:41.483223] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:09:58.467 [2024-12-06 15:36:41.715584] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:59.844 15:36:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:09:59.844 00:09:59.844 real 0m4.176s 00:09:59.844 user 0m4.653s 00:09:59.844 sys 0m1.217s 00:09:59.844 15:36:42 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.844 ************************************ 00:09:59.844 END TEST raid_function_test_raid0 00:09:59.844 ************************************ 00:09:59.844 15:36:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:59.844 15:36:43 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:09:59.844 15:36:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:59.844 15:36:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.844 15:36:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:59.844 ************************************ 00:09:59.844 START TEST raid_function_test_concat 00:09:59.844 ************************************ 00:09:59.844 15:36:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:09:59.844 15:36:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:09:59.844 15:36:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:09:59.844 15:36:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:09:59.844 15:36:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60466 00:09:59.844 15:36:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:59.844 Process raid pid: 60466 00:09:59.844 15:36:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60466' 00:09:59.844 15:36:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60466 00:09:59.844 15:36:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60466 ']' 00:09:59.844 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:09:59.844 15:36:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.844 15:36:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:59.844 15:36:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.844 15:36:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:59.844 15:36:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:10:00.103 [2024-12-06 15:36:43.163873] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:10:00.103 [2024-12-06 15:36:43.164279] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.103 [2024-12-06 15:36:43.349253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.362 [2024-12-06 15:36:43.497825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.620 [2024-12-06 15:36:43.746724] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.620 [2024-12-06 15:36:43.747027] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.879 15:36:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:00.879 15:36:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:10:00.879 15:36:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:10:00.879 15:36:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:00.879 15:36:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:10:00.879 Base_1 00:10:00.879 15:36:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.879 15:36:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:10:00.879 15:36:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.879 15:36:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:10:00.879 Base_2 00:10:00.879 15:36:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.879 15:36:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:10:00.879 15:36:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.879 15:36:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:10:00.879 [2024-12-06 15:36:44.131208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:10:00.879 [2024-12-06 15:36:44.134288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:10:00.879 [2024-12-06 15:36:44.134396] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:00.879 [2024-12-06 15:36:44.134413] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:00.879 [2024-12-06 15:36:44.134953] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:00.879 [2024-12-06 15:36:44.135206] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:00.879 [2024-12-06 15:36:44.135221] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 
00:10:00.879 [2024-12-06 15:36:44.135548] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:00.879 15:36:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.879 15:36:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:00.879 15:36:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.879 15:36:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:10:00.879 15:36:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:10:00.879 15:36:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.138 15:36:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:10:01.138 15:36:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:10:01.138 15:36:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:10:01.138 15:36:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:01.138 15:36:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:10:01.138 15:36:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:01.138 15:36:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:10:01.138 15:36:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:01.138 15:36:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:10:01.138 15:36:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:01.138 15:36:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:01.138 15:36:44 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:10:01.138 [2024-12-06 15:36:44.407205] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:01.138 /dev/nbd0 00:10:01.396 15:36:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:01.396 15:36:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:01.396 15:36:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:01.396 15:36:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:10:01.396 15:36:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:01.396 15:36:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:01.396 15:36:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:01.396 15:36:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:10:01.396 15:36:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:01.396 15:36:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:01.396 15:36:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:01.396 1+0 records in 00:10:01.396 1+0 records out 00:10:01.396 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000747904 s, 5.5 MB/s 00:10:01.396 15:36:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:01.396 15:36:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 
00:10:01.396 15:36:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:01.396 15:36:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:01.396 15:36:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:10:01.396 15:36:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:01.396 15:36:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:01.396 15:36:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:10:01.396 15:36:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:10:01.396 15:36:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:10:01.654 15:36:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:01.654 { 00:10:01.654 "nbd_device": "/dev/nbd0", 00:10:01.654 "bdev_name": "raid" 00:10:01.654 } 00:10:01.654 ]' 00:10:01.654 15:36:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:01.654 15:36:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:01.654 { 00:10:01.654 "nbd_device": "/dev/nbd0", 00:10:01.654 "bdev_name": "raid" 00:10:01.654 } 00:10:01.654 ]' 00:10:01.654 15:36:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:10:01.654 15:36:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:10:01.654 15:36:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:01.654 15:36:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:10:01.654 15:36:44 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:10:01.654 15:36:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:10:01.654 15:36:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:10:01.654 15:36:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:10:01.654 15:36:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:10:01.654 15:36:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:10:01.654 15:36:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:10:01.654 15:36:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:10:01.654 15:36:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:10:01.654 15:36:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:10:01.654 15:36:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:10:01.654 15:36:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:10:01.654 15:36:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:10:01.654 15:36:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:10:01.654 15:36:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:10:01.654 15:36:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:10:01.654 15:36:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:10:01.654 15:36:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:10:01.654 15:36:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local 
unmap_len 00:10:01.654 15:36:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:10:01.654 4096+0 records in 00:10:01.654 4096+0 records out 00:10:01.654 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0363924 s, 57.6 MB/s 00:10:01.654 15:36:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:10:01.913 4096+0 records in 00:10:01.913 4096+0 records out 00:10:01.913 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.246334 s, 8.5 MB/s 00:10:01.913 15:36:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:10:01.913 15:36:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:01.913 15:36:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:10:01.913 15:36:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:01.913 15:36:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:10:01.913 15:36:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:10:01.913 15:36:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:10:01.913 128+0 records in 00:10:01.913 128+0 records out 00:10:01.913 65536 bytes (66 kB, 64 KiB) copied, 0.00168526 s, 38.9 MB/s 00:10:01.913 15:36:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:10:01.913 15:36:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:10:01.913 15:36:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:01.913 15:36:45 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@36 -- # (( i++ )) 00:10:01.913 15:36:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:01.913 15:36:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:10:01.913 15:36:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:10:01.913 15:36:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:10:01.913 2035+0 records in 00:10:01.913 2035+0 records out 00:10:01.913 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.02106 s, 49.5 MB/s 00:10:01.913 15:36:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:10:01.913 15:36:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:10:01.913 15:36:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:01.913 15:36:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:10:01.913 15:36:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:01.913 15:36:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:10:01.913 15:36:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:10:01.913 15:36:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:10:01.913 456+0 records in 00:10:01.913 456+0 records out 00:10:01.913 233472 bytes (233 kB, 228 KiB) copied, 0.00316972 s, 73.7 MB/s 00:10:01.913 15:36:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:10:01.913 15:36:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 
00:10:02.170 15:36:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:02.170 15:36:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:10:02.170 15:36:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:02.170 15:36:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:10:02.170 15:36:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:10:02.170 15:36:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:02.170 15:36:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:02.170 15:36:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:02.170 15:36:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:10:02.170 15:36:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:02.170 15:36:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:02.170 15:36:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:02.170 [2024-12-06 15:36:45.458519] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:02.170 15:36:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:02.170 15:36:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:02.170 15:36:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:02.170 15:36:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:02.170 15:36:45 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:02.428 15:36:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:10:02.428 15:36:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:10:02.428 15:36:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:10:02.428 15:36:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:10:02.428 15:36:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:10:02.686 15:36:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:02.687 15:36:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:02.687 15:36:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:02.687 15:36:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:02.687 15:36:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:02.687 15:36:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:02.687 15:36:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:10:02.687 15:36:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:10:02.687 15:36:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:02.687 15:36:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:10:02.687 15:36:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:10:02.687 15:36:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60466 00:10:02.687 15:36:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60466 ']' 
00:10:02.687 15:36:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60466 00:10:02.687 15:36:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:10:02.687 15:36:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:02.687 15:36:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60466 00:10:02.687 killing process with pid 60466 00:10:02.687 15:36:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:02.687 15:36:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:02.687 15:36:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60466' 00:10:02.687 15:36:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60466 00:10:02.687 [2024-12-06 15:36:45.918374] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:02.687 15:36:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60466 00:10:02.687 [2024-12-06 15:36:45.918523] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:02.687 [2024-12-06 15:36:45.918594] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:02.687 [2024-12-06 15:36:45.918611] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:10:02.945 [2024-12-06 15:36:46.152183] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:04.322 15:36:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:10:04.323 00:10:04.323 real 0m4.344s 00:10:04.323 user 0m4.919s 00:10:04.323 sys 0m1.276s 00:10:04.323 15:36:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:10:04.323 15:36:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:10:04.323 ************************************ 00:10:04.323 END TEST raid_function_test_concat 00:10:04.323 ************************************ 00:10:04.323 15:36:47 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:10:04.323 15:36:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:04.323 15:36:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.323 15:36:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:04.323 ************************************ 00:10:04.323 START TEST raid0_resize_test 00:10:04.323 ************************************ 00:10:04.323 15:36:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:10:04.323 15:36:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:10:04.323 15:36:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:10:04.323 15:36:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:10:04.323 15:36:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:10:04.323 15:36:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:10:04.323 15:36:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:10:04.323 15:36:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:10:04.323 15:36:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:10:04.323 15:36:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60601 00:10:04.323 Process raid pid: 60601 00:10:04.323 15:36:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60601' 00:10:04.323 15:36:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:04.323 15:36:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60601 00:10:04.323 15:36:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60601 ']' 00:10:04.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.323 15:36:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.323 15:36:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:04.323 15:36:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.323 15:36:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:04.323 15:36:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.323 [2024-12-06 15:36:47.588334] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:10:04.323 [2024-12-06 15:36:47.588494] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.581 [2024-12-06 15:36:47.779809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.839 [2024-12-06 15:36:47.925892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.096 [2024-12-06 15:36:48.176482] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:05.096 [2024-12-06 15:36:48.176834] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:05.355 15:36:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:05.355 15:36:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:10:05.355 15:36:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:10:05.355 15:36:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.355 15:36:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.355 Base_1 00:10:05.355 15:36:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.355 15:36:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:10:05.355 15:36:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.355 15:36:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.355 Base_2 00:10:05.355 15:36:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.355 15:36:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:10:05.355 15:36:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd 
bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:10:05.355 15:36:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.355 15:36:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.355 [2024-12-06 15:36:48.477831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:10:05.355 [2024-12-06 15:36:48.480178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:10:05.355 [2024-12-06 15:36:48.480241] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:05.355 [2024-12-06 15:36:48.480255] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:05.355 [2024-12-06 15:36:48.480559] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:10:05.355 [2024-12-06 15:36:48.480696] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:05.355 [2024-12-06 15:36:48.480706] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:10:05.355 [2024-12-06 15:36:48.480859] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:05.355 15:36:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.355 15:36:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:10:05.355 15:36:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.355 15:36:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.355 [2024-12-06 15:36:48.485786] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:05.356 [2024-12-06 15:36:48.485817] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:10:05.356 true 
00:10:05.356 15:36:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.356 15:36:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:05.356 15:36:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.356 15:36:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:10:05.356 15:36:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.356 [2024-12-06 15:36:48.501963] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:05.356 15:36:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.356 15:36:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:10:05.356 15:36:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:10:05.356 15:36:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:10:05.356 15:36:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:10:05.356 15:36:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:10:05.356 15:36:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:10:05.356 15:36:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.356 15:36:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.356 [2024-12-06 15:36:48.545728] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:05.356 [2024-12-06 15:36:48.545865] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:10:05.356 [2024-12-06 15:36:48.546016] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:10:05.356 true 
00:10:05.356 15:36:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.356 15:36:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:05.356 15:36:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.356 15:36:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.356 15:36:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:10:05.356 [2024-12-06 15:36:48.557867] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:05.356 15:36:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.356 15:36:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:10:05.356 15:36:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:10:05.356 15:36:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:10:05.356 15:36:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:10:05.356 15:36:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:10:05.356 15:36:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60601 00:10:05.356 15:36:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60601 ']' 00:10:05.356 15:36:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60601 00:10:05.356 15:36:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:10:05.356 15:36:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:05.356 15:36:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60601 00:10:05.614 15:36:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:05.614 15:36:48 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:05.614 15:36:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60601' 00:10:05.614 killing process with pid 60601 00:10:05.614 15:36:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60601 00:10:05.614 [2024-12-06 15:36:48.652867] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:05.614 15:36:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60601 00:10:05.614 [2024-12-06 15:36:48.653126] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:05.614 [2024-12-06 15:36:48.653349] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:05.614 [2024-12-06 15:36:48.653396] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:10:05.614 [2024-12-06 15:36:48.672656] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:07.077 15:36:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:10:07.077 00:10:07.077 real 0m2.441s 00:10:07.077 user 0m2.492s 00:10:07.077 sys 0m0.488s 00:10:07.077 15:36:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.077 15:36:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.077 ************************************ 00:10:07.077 END TEST raid0_resize_test 00:10:07.077 ************************************ 00:10:07.077 15:36:49 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:10:07.077 15:36:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:07.077 15:36:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.077 15:36:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:07.077 
************************************ 00:10:07.077 START TEST raid1_resize_test 00:10:07.077 ************************************ 00:10:07.077 15:36:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:10:07.077 15:36:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:10:07.077 15:36:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:10:07.077 15:36:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:10:07.077 15:36:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:10:07.077 15:36:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:10:07.077 15:36:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:10:07.077 15:36:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:10:07.077 15:36:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:10:07.077 15:36:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60662 00:10:07.077 15:36:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:07.077 15:36:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60662' 00:10:07.077 Process raid pid: 60662 00:10:07.077 15:36:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60662 00:10:07.077 15:36:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60662 ']' 00:10:07.077 15:36:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:07.077 15:36:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:07.077 15:36:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.077 15:36:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:07.077 15:36:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.077 [2024-12-06 15:36:50.106568] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:10:07.077 [2024-12-06 15:36:50.106735] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:07.077 [2024-12-06 15:36:50.295073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.335 [2024-12-06 15:36:50.447391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.593 [2024-12-06 15:36:50.690944] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:07.593 [2024-12-06 15:36:50.691010] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:07.851 15:36:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:07.851 15:36:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:10:07.851 15:36:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:10:07.851 15:36:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.851 15:36:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.851 Base_1 00:10:07.851 15:36:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.851 15:36:50 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:10:07.851 15:36:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.851 15:36:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.851 Base_2 00:10:07.851 15:36:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.851 15:36:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:10:07.851 15:36:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:10:07.851 15:36:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.851 15:36:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.851 [2024-12-06 15:36:50.985866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:10:07.851 [2024-12-06 15:36:50.988206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:10:07.851 [2024-12-06 15:36:50.988280] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:07.851 [2024-12-06 15:36:50.988294] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:07.851 [2024-12-06 15:36:50.988594] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:10:07.851 [2024-12-06 15:36:50.988722] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:07.851 [2024-12-06 15:36:50.988732] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:10:07.851 [2024-12-06 15:36:50.988923] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:07.851 15:36:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.851 15:36:50 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:10:07.851 15:36:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.851 15:36:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.851 [2024-12-06 15:36:50.993827] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:07.851 [2024-12-06 15:36:50.993861] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:10:07.851 true 00:10:07.851 15:36:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.851 15:36:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:10:07.851 15:36:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:07.851 15:36:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.851 15:36:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.851 [2024-12-06 15:36:51.010004] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:07.851 15:36:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.851 15:36:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:10:07.851 15:36:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:10:07.851 15:36:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:10:07.851 15:36:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:10:07.851 15:36:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:10:07.851 15:36:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:10:07.851 15:36:51 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.851 15:36:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.851 [2024-12-06 15:36:51.053785] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:07.851 [2024-12-06 15:36:51.053817] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:10:07.851 [2024-12-06 15:36:51.053860] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:10:07.851 true 00:10:07.851 15:36:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.851 15:36:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:07.851 15:36:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:10:07.851 15:36:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.851 15:36:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.851 [2024-12-06 15:36:51.069941] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:07.851 15:36:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.851 15:36:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:10:07.851 15:36:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:10:07.851 15:36:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:10:07.851 15:36:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:10:07.851 15:36:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:10:07.851 15:36:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60662 00:10:07.851 15:36:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 
-- # '[' -z 60662 ']' 00:10:07.851 15:36:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60662 00:10:07.851 15:36:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:10:07.851 15:36:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:07.851 15:36:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60662 00:10:08.109 killing process with pid 60662 00:10:08.109 15:36:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:08.109 15:36:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:08.109 15:36:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60662' 00:10:08.109 15:36:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60662 00:10:08.109 [2024-12-06 15:36:51.153489] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:08.109 15:36:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60662 00:10:08.109 [2024-12-06 15:36:51.153636] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:08.109 [2024-12-06 15:36:51.154235] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:08.109 [2024-12-06 15:36:51.154263] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:10:08.109 [2024-12-06 15:36:51.172393] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:09.483 15:36:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:10:09.483 00:10:09.483 real 0m2.427s 00:10:09.483 user 0m2.471s 00:10:09.483 sys 0m0.485s 00:10:09.483 15:36:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.483 15:36:52 bdev_raid.raid1_resize_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:09.483 ************************************ 00:10:09.483 END TEST raid1_resize_test 00:10:09.483 ************************************ 00:10:09.483 15:36:52 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:09.483 15:36:52 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:09.483 15:36:52 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:10:09.483 15:36:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:09.483 15:36:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.483 15:36:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:09.483 ************************************ 00:10:09.483 START TEST raid_state_function_test 00:10:09.483 ************************************ 00:10:09.483 15:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:10:09.483 15:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:09.483 15:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:09.483 15:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:09.483 15:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:09.483 15:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:09.483 15:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:09.483 15:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:09.483 15:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:09.483 15:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:09.483 
15:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:09.483 15:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:09.483 15:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:09.483 15:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:09.483 15:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:09.483 15:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:09.483 15:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:09.483 15:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:09.483 15:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:09.483 15:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:09.483 15:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:09.483 15:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:09.483 15:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:09.483 15:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:09.483 15:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60725 00:10:09.483 15:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:09.483 Process raid pid: 60725 00:10:09.483 15:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60725' 00:10:09.483 15:36:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60725 00:10:09.483 15:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60725 ']' 00:10:09.483 15:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.483 15:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:09.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.483 15:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.483 15:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:09.483 15:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.483 [2024-12-06 15:36:52.612371] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:10:09.483 [2024-12-06 15:36:52.612549] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:09.741 [2024-12-06 15:36:52.800446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.741 [2024-12-06 15:36:52.948382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.999 [2024-12-06 15:36:53.218171] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:09.999 [2024-12-06 15:36:53.218238] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:10.257 15:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:10.257 15:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:10.257 15:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:10.257 15:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.257 15:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.257 [2024-12-06 15:36:53.469108] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:10.257 [2024-12-06 15:36:53.469183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:10.257 [2024-12-06 15:36:53.469197] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:10.257 [2024-12-06 15:36:53.469211] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:10.257 15:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.257 15:36:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:10.258 15:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.258 15:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.258 15:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.258 15:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.258 15:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:10.258 15:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.258 15:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.258 15:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.258 15:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.258 15:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.258 15:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.258 15:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.258 15:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.258 15:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.258 15:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.258 "name": "Existed_Raid", 00:10:10.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.258 "strip_size_kb": 64, 00:10:10.258 "state": "configuring", 00:10:10.258 
"raid_level": "raid0", 00:10:10.258 "superblock": false, 00:10:10.258 "num_base_bdevs": 2, 00:10:10.258 "num_base_bdevs_discovered": 0, 00:10:10.258 "num_base_bdevs_operational": 2, 00:10:10.258 "base_bdevs_list": [ 00:10:10.258 { 00:10:10.258 "name": "BaseBdev1", 00:10:10.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.258 "is_configured": false, 00:10:10.258 "data_offset": 0, 00:10:10.258 "data_size": 0 00:10:10.258 }, 00:10:10.258 { 00:10:10.258 "name": "BaseBdev2", 00:10:10.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.258 "is_configured": false, 00:10:10.258 "data_offset": 0, 00:10:10.258 "data_size": 0 00:10:10.258 } 00:10:10.258 ] 00:10:10.258 }' 00:10:10.258 15:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.258 15:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.825 15:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:10.825 15:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.825 15:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.825 [2024-12-06 15:36:53.876746] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:10.825 [2024-12-06 15:36:53.876792] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:10.825 15:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.825 15:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:10.825 15:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.825 15:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:10.825 [2024-12-06 15:36:53.888712] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:10.825 [2024-12-06 15:36:53.888774] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:10.825 [2024-12-06 15:36:53.888786] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:10.825 [2024-12-06 15:36:53.888804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:10.825 15:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.825 15:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:10.825 15:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.825 15:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.825 [2024-12-06 15:36:53.942734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:10.825 BaseBdev1 00:10:10.825 15:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.825 15:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:10.825 15:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:10.825 15:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:10.825 15:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:10.825 15:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:10.825 15:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:10.825 15:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:10:10.825 15:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.825 15:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.825 15:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.825 15:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:10.825 15:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.825 15:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.825 [ 00:10:10.825 { 00:10:10.825 "name": "BaseBdev1", 00:10:10.825 "aliases": [ 00:10:10.825 "18b8189f-6569-4e9a-a553-bf84d28b43ef" 00:10:10.825 ], 00:10:10.825 "product_name": "Malloc disk", 00:10:10.825 "block_size": 512, 00:10:10.825 "num_blocks": 65536, 00:10:10.825 "uuid": "18b8189f-6569-4e9a-a553-bf84d28b43ef", 00:10:10.825 "assigned_rate_limits": { 00:10:10.825 "rw_ios_per_sec": 0, 00:10:10.825 "rw_mbytes_per_sec": 0, 00:10:10.825 "r_mbytes_per_sec": 0, 00:10:10.825 "w_mbytes_per_sec": 0 00:10:10.825 }, 00:10:10.825 "claimed": true, 00:10:10.825 "claim_type": "exclusive_write", 00:10:10.825 "zoned": false, 00:10:10.825 "supported_io_types": { 00:10:10.825 "read": true, 00:10:10.825 "write": true, 00:10:10.825 "unmap": true, 00:10:10.825 "flush": true, 00:10:10.825 "reset": true, 00:10:10.825 "nvme_admin": false, 00:10:10.825 "nvme_io": false, 00:10:10.825 "nvme_io_md": false, 00:10:10.825 "write_zeroes": true, 00:10:10.825 "zcopy": true, 00:10:10.825 "get_zone_info": false, 00:10:10.825 "zone_management": false, 00:10:10.825 "zone_append": false, 00:10:10.825 "compare": false, 00:10:10.825 "compare_and_write": false, 00:10:10.825 "abort": true, 00:10:10.825 "seek_hole": false, 00:10:10.825 "seek_data": false, 00:10:10.825 "copy": true, 00:10:10.825 "nvme_iov_md": 
false 00:10:10.825 }, 00:10:10.825 "memory_domains": [ 00:10:10.825 { 00:10:10.825 "dma_device_id": "system", 00:10:10.825 "dma_device_type": 1 00:10:10.825 }, 00:10:10.825 { 00:10:10.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.825 "dma_device_type": 2 00:10:10.825 } 00:10:10.825 ], 00:10:10.825 "driver_specific": {} 00:10:10.825 } 00:10:10.825 ] 00:10:10.825 15:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.825 15:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:10.826 15:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:10.826 15:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.826 15:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.826 15:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.826 15:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.826 15:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:10.826 15:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.826 15:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.826 15:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.826 15:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.826 15:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.826 15:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.826 
15:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.826 15:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.826 15:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.826 15:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.826 "name": "Existed_Raid", 00:10:10.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.826 "strip_size_kb": 64, 00:10:10.826 "state": "configuring", 00:10:10.826 "raid_level": "raid0", 00:10:10.826 "superblock": false, 00:10:10.826 "num_base_bdevs": 2, 00:10:10.826 "num_base_bdevs_discovered": 1, 00:10:10.826 "num_base_bdevs_operational": 2, 00:10:10.826 "base_bdevs_list": [ 00:10:10.826 { 00:10:10.826 "name": "BaseBdev1", 00:10:10.826 "uuid": "18b8189f-6569-4e9a-a553-bf84d28b43ef", 00:10:10.826 "is_configured": true, 00:10:10.826 "data_offset": 0, 00:10:10.826 "data_size": 65536 00:10:10.826 }, 00:10:10.826 { 00:10:10.826 "name": "BaseBdev2", 00:10:10.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.826 "is_configured": false, 00:10:10.826 "data_offset": 0, 00:10:10.826 "data_size": 0 00:10:10.826 } 00:10:10.826 ] 00:10:10.826 }' 00:10:10.826 15:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.826 15:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.392 15:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:11.392 15:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.392 15:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.392 [2024-12-06 15:36:54.386392] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:11.392 [2024-12-06 15:36:54.386638] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:11.392 15:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.392 15:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:11.392 15:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.392 15:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.392 [2024-12-06 15:36:54.398421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:11.392 [2024-12-06 15:36:54.400978] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:11.392 [2024-12-06 15:36:54.401137] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:11.392 15:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.392 15:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:11.392 15:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:11.392 15:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:11.392 15:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.392 15:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.392 15:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:11.392 15:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.392 15:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:10:11.392 15:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.392 15:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.392 15:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.392 15:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.392 15:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.392 15:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.392 15:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.392 15:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.392 15:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.392 15:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.392 "name": "Existed_Raid", 00:10:11.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.392 "strip_size_kb": 64, 00:10:11.392 "state": "configuring", 00:10:11.392 "raid_level": "raid0", 00:10:11.392 "superblock": false, 00:10:11.392 "num_base_bdevs": 2, 00:10:11.392 "num_base_bdevs_discovered": 1, 00:10:11.392 "num_base_bdevs_operational": 2, 00:10:11.392 "base_bdevs_list": [ 00:10:11.392 { 00:10:11.393 "name": "BaseBdev1", 00:10:11.393 "uuid": "18b8189f-6569-4e9a-a553-bf84d28b43ef", 00:10:11.393 "is_configured": true, 00:10:11.393 "data_offset": 0, 00:10:11.393 "data_size": 65536 00:10:11.393 }, 00:10:11.393 { 00:10:11.393 "name": "BaseBdev2", 00:10:11.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.393 "is_configured": false, 00:10:11.393 "data_offset": 0, 00:10:11.393 "data_size": 0 00:10:11.393 } 00:10:11.393 
] 00:10:11.393 }' 00:10:11.393 15:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.393 15:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.651 15:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:11.651 15:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.651 15:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.651 [2024-12-06 15:36:54.833228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:11.651 [2024-12-06 15:36:54.833558] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:11.651 [2024-12-06 15:36:54.833584] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:11.651 [2024-12-06 15:36:54.833950] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:11.651 [2024-12-06 15:36:54.834165] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:11.651 [2024-12-06 15:36:54.834180] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:11.651 [2024-12-06 15:36:54.834526] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:11.651 BaseBdev2 00:10:11.651 15:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.651 15:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:11.651 15:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:11.651 15:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:11.651 15:36:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:11.651 15:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:11.651 15:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:11.651 15:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:11.651 15:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.651 15:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.651 15:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.651 15:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:11.651 15:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.651 15:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.651 [ 00:10:11.651 { 00:10:11.651 "name": "BaseBdev2", 00:10:11.651 "aliases": [ 00:10:11.651 "c6bf4f66-2d65-4355-b8dc-478a30bbae98" 00:10:11.651 ], 00:10:11.651 "product_name": "Malloc disk", 00:10:11.651 "block_size": 512, 00:10:11.651 "num_blocks": 65536, 00:10:11.651 "uuid": "c6bf4f66-2d65-4355-b8dc-478a30bbae98", 00:10:11.651 "assigned_rate_limits": { 00:10:11.651 "rw_ios_per_sec": 0, 00:10:11.651 "rw_mbytes_per_sec": 0, 00:10:11.651 "r_mbytes_per_sec": 0, 00:10:11.651 "w_mbytes_per_sec": 0 00:10:11.651 }, 00:10:11.651 "claimed": true, 00:10:11.651 "claim_type": "exclusive_write", 00:10:11.651 "zoned": false, 00:10:11.651 "supported_io_types": { 00:10:11.651 "read": true, 00:10:11.651 "write": true, 00:10:11.651 "unmap": true, 00:10:11.651 "flush": true, 00:10:11.651 "reset": true, 00:10:11.651 "nvme_admin": false, 00:10:11.651 "nvme_io": false, 00:10:11.651 "nvme_io_md": 
false, 00:10:11.651 "write_zeroes": true, 00:10:11.651 "zcopy": true, 00:10:11.651 "get_zone_info": false, 00:10:11.651 "zone_management": false, 00:10:11.651 "zone_append": false, 00:10:11.651 "compare": false, 00:10:11.651 "compare_and_write": false, 00:10:11.651 "abort": true, 00:10:11.651 "seek_hole": false, 00:10:11.651 "seek_data": false, 00:10:11.651 "copy": true, 00:10:11.651 "nvme_iov_md": false 00:10:11.651 }, 00:10:11.651 "memory_domains": [ 00:10:11.651 { 00:10:11.651 "dma_device_id": "system", 00:10:11.651 "dma_device_type": 1 00:10:11.651 }, 00:10:11.651 { 00:10:11.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.651 "dma_device_type": 2 00:10:11.651 } 00:10:11.651 ], 00:10:11.651 "driver_specific": {} 00:10:11.651 } 00:10:11.651 ] 00:10:11.651 15:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.651 15:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:11.651 15:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:11.651 15:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:11.652 15:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:10:11.652 15:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.652 15:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:11.652 15:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:11.652 15:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.652 15:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:11.652 15:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:11.652 15:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.652 15:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.652 15:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.652 15:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.652 15:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.652 15:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.652 15:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.652 15:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.652 15:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.652 "name": "Existed_Raid", 00:10:11.652 "uuid": "72a473ca-241f-4310-aafb-f3d438484375", 00:10:11.652 "strip_size_kb": 64, 00:10:11.652 "state": "online", 00:10:11.652 "raid_level": "raid0", 00:10:11.652 "superblock": false, 00:10:11.652 "num_base_bdevs": 2, 00:10:11.652 "num_base_bdevs_discovered": 2, 00:10:11.652 "num_base_bdevs_operational": 2, 00:10:11.652 "base_bdevs_list": [ 00:10:11.652 { 00:10:11.652 "name": "BaseBdev1", 00:10:11.652 "uuid": "18b8189f-6569-4e9a-a553-bf84d28b43ef", 00:10:11.652 "is_configured": true, 00:10:11.652 "data_offset": 0, 00:10:11.652 "data_size": 65536 00:10:11.652 }, 00:10:11.652 { 00:10:11.652 "name": "BaseBdev2", 00:10:11.652 "uuid": "c6bf4f66-2d65-4355-b8dc-478a30bbae98", 00:10:11.652 "is_configured": true, 00:10:11.652 "data_offset": 0, 00:10:11.652 "data_size": 65536 00:10:11.652 } 00:10:11.652 ] 00:10:11.652 }' 00:10:11.652 15:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:10:11.652 15:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.220 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:12.220 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:12.220 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:12.220 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:12.220 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:12.220 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:12.220 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:12.220 15:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.220 15:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.220 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:12.220 [2024-12-06 15:36:55.276976] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:12.220 15:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.220 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:12.220 "name": "Existed_Raid", 00:10:12.220 "aliases": [ 00:10:12.220 "72a473ca-241f-4310-aafb-f3d438484375" 00:10:12.220 ], 00:10:12.220 "product_name": "Raid Volume", 00:10:12.220 "block_size": 512, 00:10:12.220 "num_blocks": 131072, 00:10:12.220 "uuid": "72a473ca-241f-4310-aafb-f3d438484375", 00:10:12.220 "assigned_rate_limits": { 00:10:12.220 "rw_ios_per_sec": 0, 00:10:12.220 "rw_mbytes_per_sec": 0, 00:10:12.220 "r_mbytes_per_sec": 
0, 00:10:12.220 "w_mbytes_per_sec": 0 00:10:12.220 }, 00:10:12.220 "claimed": false, 00:10:12.220 "zoned": false, 00:10:12.220 "supported_io_types": { 00:10:12.220 "read": true, 00:10:12.220 "write": true, 00:10:12.220 "unmap": true, 00:10:12.220 "flush": true, 00:10:12.220 "reset": true, 00:10:12.220 "nvme_admin": false, 00:10:12.220 "nvme_io": false, 00:10:12.220 "nvme_io_md": false, 00:10:12.220 "write_zeroes": true, 00:10:12.220 "zcopy": false, 00:10:12.220 "get_zone_info": false, 00:10:12.220 "zone_management": false, 00:10:12.220 "zone_append": false, 00:10:12.220 "compare": false, 00:10:12.220 "compare_and_write": false, 00:10:12.220 "abort": false, 00:10:12.220 "seek_hole": false, 00:10:12.220 "seek_data": false, 00:10:12.220 "copy": false, 00:10:12.220 "nvme_iov_md": false 00:10:12.220 }, 00:10:12.220 "memory_domains": [ 00:10:12.220 { 00:10:12.220 "dma_device_id": "system", 00:10:12.220 "dma_device_type": 1 00:10:12.220 }, 00:10:12.220 { 00:10:12.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.220 "dma_device_type": 2 00:10:12.220 }, 00:10:12.220 { 00:10:12.220 "dma_device_id": "system", 00:10:12.220 "dma_device_type": 1 00:10:12.220 }, 00:10:12.220 { 00:10:12.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.220 "dma_device_type": 2 00:10:12.220 } 00:10:12.220 ], 00:10:12.220 "driver_specific": { 00:10:12.220 "raid": { 00:10:12.220 "uuid": "72a473ca-241f-4310-aafb-f3d438484375", 00:10:12.220 "strip_size_kb": 64, 00:10:12.220 "state": "online", 00:10:12.220 "raid_level": "raid0", 00:10:12.220 "superblock": false, 00:10:12.220 "num_base_bdevs": 2, 00:10:12.220 "num_base_bdevs_discovered": 2, 00:10:12.220 "num_base_bdevs_operational": 2, 00:10:12.220 "base_bdevs_list": [ 00:10:12.220 { 00:10:12.220 "name": "BaseBdev1", 00:10:12.220 "uuid": "18b8189f-6569-4e9a-a553-bf84d28b43ef", 00:10:12.220 "is_configured": true, 00:10:12.220 "data_offset": 0, 00:10:12.220 "data_size": 65536 00:10:12.220 }, 00:10:12.220 { 00:10:12.220 "name": "BaseBdev2", 
00:10:12.220 "uuid": "c6bf4f66-2d65-4355-b8dc-478a30bbae98", 00:10:12.220 "is_configured": true, 00:10:12.220 "data_offset": 0, 00:10:12.220 "data_size": 65536 00:10:12.220 } 00:10:12.220 ] 00:10:12.220 } 00:10:12.220 } 00:10:12.220 }' 00:10:12.220 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:12.220 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:12.220 BaseBdev2' 00:10:12.220 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.220 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:12.220 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:12.220 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:12.220 15:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.220 15:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.220 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.220 15:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.220 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.220 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.220 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:12.220 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:10:12.220 15:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.220 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.220 15:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.220 15:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.220 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.220 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.220 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:12.220 15:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.220 15:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.478 [2024-12-06 15:36:55.520961] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:12.478 [2024-12-06 15:36:55.521006] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:12.478 [2024-12-06 15:36:55.521079] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:12.478 15:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.478 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:12.478 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:12.478 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:12.478 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:12.478 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:10:12.478 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:10:12.478 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.478 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:12.478 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:12.478 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.478 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:12.478 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.478 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.478 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.478 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.478 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.478 15:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.478 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.478 15:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.478 15:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.478 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.478 "name": "Existed_Raid", 00:10:12.478 "uuid": "72a473ca-241f-4310-aafb-f3d438484375", 00:10:12.478 "strip_size_kb": 64, 00:10:12.478 
"state": "offline", 00:10:12.478 "raid_level": "raid0", 00:10:12.478 "superblock": false, 00:10:12.478 "num_base_bdevs": 2, 00:10:12.478 "num_base_bdevs_discovered": 1, 00:10:12.478 "num_base_bdevs_operational": 1, 00:10:12.478 "base_bdevs_list": [ 00:10:12.478 { 00:10:12.478 "name": null, 00:10:12.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.478 "is_configured": false, 00:10:12.478 "data_offset": 0, 00:10:12.478 "data_size": 65536 00:10:12.478 }, 00:10:12.478 { 00:10:12.478 "name": "BaseBdev2", 00:10:12.478 "uuid": "c6bf4f66-2d65-4355-b8dc-478a30bbae98", 00:10:12.478 "is_configured": true, 00:10:12.478 "data_offset": 0, 00:10:12.478 "data_size": 65536 00:10:12.478 } 00:10:12.478 ] 00:10:12.478 }' 00:10:12.478 15:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.478 15:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.042 15:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:13.042 15:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:13.042 15:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.042 15:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:13.042 15:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.042 15:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.042 15:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.042 15:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:13.042 15:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:13.042 15:36:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:13.042 15:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.042 15:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.043 [2024-12-06 15:36:56.078992] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:13.043 [2024-12-06 15:36:56.079066] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:13.043 15:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.043 15:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:13.043 15:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:13.043 15:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.043 15:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:13.043 15:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.043 15:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.043 15:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.043 15:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:13.043 15:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:13.043 15:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:13.043 15:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60725 00:10:13.043 15:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60725 ']' 00:10:13.043 15:36:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 60725 00:10:13.043 15:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:13.043 15:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:13.043 15:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60725 00:10:13.043 killing process with pid 60725 00:10:13.043 15:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:13.043 15:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:13.043 15:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60725' 00:10:13.043 15:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60725 00:10:13.043 [2024-12-06 15:36:56.286831] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:13.043 15:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60725 00:10:13.043 [2024-12-06 15:36:56.305202] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:14.416 15:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:14.416 00:10:14.416 real 0m5.071s 00:10:14.416 user 0m7.028s 00:10:14.416 sys 0m0.974s 00:10:14.416 15:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.416 15:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.416 ************************************ 00:10:14.416 END TEST raid_state_function_test 00:10:14.416 ************************************ 00:10:14.416 15:36:57 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:10:14.416 15:36:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:10:14.416 15:36:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:14.416 15:36:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:14.416 ************************************ 00:10:14.416 START TEST raid_state_function_test_sb 00:10:14.416 ************************************ 00:10:14.416 15:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:10:14.416 15:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:14.416 15:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:14.416 15:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:14.416 15:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:14.416 15:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:14.416 15:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:14.416 15:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:14.416 15:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:14.416 15:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:14.416 15:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:14.416 15:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:14.416 15:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:14.416 15:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:14.416 15:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:10:14.416 15:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:14.416 15:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:14.416 15:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:14.416 15:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:14.416 15:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:14.416 15:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:14.416 15:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:14.416 15:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:14.416 15:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:14.416 15:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60978 00:10:14.416 15:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:14.416 Process raid pid: 60978 00:10:14.416 15:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60978' 00:10:14.416 15:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60978 00:10:14.416 15:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60978 ']' 00:10:14.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:14.416 15:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.416 15:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:14.416 15:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.416 15:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:14.416 15:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.674 [2024-12-06 15:36:57.762166] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:10:14.674 [2024-12-06 15:36:57.762326] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:14.674 [2024-12-06 15:36:57.949690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.933 [2024-12-06 15:36:58.096310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.191 [2024-12-06 15:36:58.346721] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:15.191 [2024-12-06 15:36:58.347014] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:15.449 15:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.449 15:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:15.449 15:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:15.449 15:36:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.449 15:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.449 [2024-12-06 15:36:58.615943] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:15.449 [2024-12-06 15:36:58.616148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:15.449 [2024-12-06 15:36:58.616242] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:15.449 [2024-12-06 15:36:58.616335] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:15.449 15:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.449 15:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:15.449 15:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.449 15:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.449 15:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.449 15:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.449 15:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:15.449 15:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.449 15:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.449 15:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.449 15:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.449 15:36:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.449 15:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.449 15:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.449 15:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.449 15:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.449 15:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.449 "name": "Existed_Raid", 00:10:15.449 "uuid": "26d069a6-4cf5-4e50-8367-213d7801dd32", 00:10:15.449 "strip_size_kb": 64, 00:10:15.449 "state": "configuring", 00:10:15.449 "raid_level": "raid0", 00:10:15.449 "superblock": true, 00:10:15.450 "num_base_bdevs": 2, 00:10:15.450 "num_base_bdevs_discovered": 0, 00:10:15.450 "num_base_bdevs_operational": 2, 00:10:15.450 "base_bdevs_list": [ 00:10:15.450 { 00:10:15.450 "name": "BaseBdev1", 00:10:15.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.450 "is_configured": false, 00:10:15.450 "data_offset": 0, 00:10:15.450 "data_size": 0 00:10:15.450 }, 00:10:15.450 { 00:10:15.450 "name": "BaseBdev2", 00:10:15.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.450 "is_configured": false, 00:10:15.450 "data_offset": 0, 00:10:15.450 "data_size": 0 00:10:15.450 } 00:10:15.450 ] 00:10:15.450 }' 00:10:15.450 15:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.450 15:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.017 15:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:16.017 15:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:16.017 15:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.017 [2024-12-06 15:36:59.031733] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:16.017 [2024-12-06 15:36:59.031781] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:16.017 15:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.017 15:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:16.017 15:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.017 15:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.017 [2024-12-06 15:36:59.039729] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:16.017 [2024-12-06 15:36:59.039907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:16.017 [2024-12-06 15:36:59.040024] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:16.017 [2024-12-06 15:36:59.040077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:16.017 15:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.017 15:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:16.017 15:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.017 15:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.017 [2024-12-06 15:36:59.095706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is 
claimed 00:10:16.017 BaseBdev1 00:10:16.017 15:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.017 15:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:16.017 15:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:16.017 15:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:16.017 15:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:16.017 15:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:16.017 15:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:16.017 15:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:16.017 15:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.017 15:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.017 15:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.017 15:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:16.017 15:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.017 15:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.017 [ 00:10:16.017 { 00:10:16.017 "name": "BaseBdev1", 00:10:16.017 "aliases": [ 00:10:16.017 "e3510b5a-0ccf-4645-a546-e048356b9e6d" 00:10:16.017 ], 00:10:16.017 "product_name": "Malloc disk", 00:10:16.017 "block_size": 512, 00:10:16.017 "num_blocks": 65536, 00:10:16.017 "uuid": "e3510b5a-0ccf-4645-a546-e048356b9e6d", 00:10:16.017 
"assigned_rate_limits": { 00:10:16.017 "rw_ios_per_sec": 0, 00:10:16.017 "rw_mbytes_per_sec": 0, 00:10:16.017 "r_mbytes_per_sec": 0, 00:10:16.018 "w_mbytes_per_sec": 0 00:10:16.018 }, 00:10:16.018 "claimed": true, 00:10:16.018 "claim_type": "exclusive_write", 00:10:16.018 "zoned": false, 00:10:16.018 "supported_io_types": { 00:10:16.018 "read": true, 00:10:16.018 "write": true, 00:10:16.018 "unmap": true, 00:10:16.018 "flush": true, 00:10:16.018 "reset": true, 00:10:16.018 "nvme_admin": false, 00:10:16.018 "nvme_io": false, 00:10:16.018 "nvme_io_md": false, 00:10:16.018 "write_zeroes": true, 00:10:16.018 "zcopy": true, 00:10:16.018 "get_zone_info": false, 00:10:16.018 "zone_management": false, 00:10:16.018 "zone_append": false, 00:10:16.018 "compare": false, 00:10:16.018 "compare_and_write": false, 00:10:16.018 "abort": true, 00:10:16.018 "seek_hole": false, 00:10:16.018 "seek_data": false, 00:10:16.018 "copy": true, 00:10:16.018 "nvme_iov_md": false 00:10:16.018 }, 00:10:16.018 "memory_domains": [ 00:10:16.018 { 00:10:16.018 "dma_device_id": "system", 00:10:16.018 "dma_device_type": 1 00:10:16.018 }, 00:10:16.018 { 00:10:16.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.018 "dma_device_type": 2 00:10:16.018 } 00:10:16.018 ], 00:10:16.018 "driver_specific": {} 00:10:16.018 } 00:10:16.018 ] 00:10:16.018 15:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.018 15:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:16.018 15:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:16.018 15:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.018 15:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.018 15:36:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.018 15:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.018 15:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:16.018 15:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.018 15:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.018 15:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.018 15:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.018 15:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.018 15:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.018 15:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.018 15:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.018 15:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.018 15:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.018 "name": "Existed_Raid", 00:10:16.018 "uuid": "f7810abd-f4ff-44d7-b766-0a68820fbfa7", 00:10:16.018 "strip_size_kb": 64, 00:10:16.018 "state": "configuring", 00:10:16.018 "raid_level": "raid0", 00:10:16.018 "superblock": true, 00:10:16.018 "num_base_bdevs": 2, 00:10:16.018 "num_base_bdevs_discovered": 1, 00:10:16.018 "num_base_bdevs_operational": 2, 00:10:16.018 "base_bdevs_list": [ 00:10:16.018 { 00:10:16.018 "name": "BaseBdev1", 00:10:16.018 "uuid": "e3510b5a-0ccf-4645-a546-e048356b9e6d", 00:10:16.018 "is_configured": true, 00:10:16.018 "data_offset": 2048, 
00:10:16.018 "data_size": 63488 00:10:16.018 }, 00:10:16.018 { 00:10:16.018 "name": "BaseBdev2", 00:10:16.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.018 "is_configured": false, 00:10:16.018 "data_offset": 0, 00:10:16.018 "data_size": 0 00:10:16.018 } 00:10:16.018 ] 00:10:16.018 }' 00:10:16.018 15:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.018 15:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.277 15:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:16.277 15:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.277 15:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.535 [2024-12-06 15:36:59.571294] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:16.535 [2024-12-06 15:36:59.571368] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:16.535 15:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.535 15:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:16.535 15:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.535 15:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.535 [2024-12-06 15:36:59.583325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:16.535 [2024-12-06 15:36:59.585828] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:16.535 [2024-12-06 15:36:59.585877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:10:16.535 15:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.535 15:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:16.535 15:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:16.535 15:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:16.535 15:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.535 15:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.535 15:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.535 15:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.535 15:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:16.535 15:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.535 15:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.535 15:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.535 15:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.535 15:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.535 15:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.535 15:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.535 15:36:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:16.535 15:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.535 15:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.535 "name": "Existed_Raid", 00:10:16.535 "uuid": "30e93dbd-4a25-489f-9335-82dda6098384", 00:10:16.535 "strip_size_kb": 64, 00:10:16.535 "state": "configuring", 00:10:16.535 "raid_level": "raid0", 00:10:16.535 "superblock": true, 00:10:16.535 "num_base_bdevs": 2, 00:10:16.535 "num_base_bdevs_discovered": 1, 00:10:16.535 "num_base_bdevs_operational": 2, 00:10:16.535 "base_bdevs_list": [ 00:10:16.535 { 00:10:16.535 "name": "BaseBdev1", 00:10:16.535 "uuid": "e3510b5a-0ccf-4645-a546-e048356b9e6d", 00:10:16.535 "is_configured": true, 00:10:16.535 "data_offset": 2048, 00:10:16.535 "data_size": 63488 00:10:16.535 }, 00:10:16.535 { 00:10:16.535 "name": "BaseBdev2", 00:10:16.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.535 "is_configured": false, 00:10:16.535 "data_offset": 0, 00:10:16.535 "data_size": 0 00:10:16.535 } 00:10:16.535 ] 00:10:16.535 }' 00:10:16.535 15:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.535 15:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.794 15:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:16.794 15:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.794 15:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.794 [2024-12-06 15:37:00.038347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:16.794 [2024-12-06 15:37:00.038789] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:16.794 [2024-12-06 15:37:00.038816] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:16.794 [2024-12-06 15:37:00.039224] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:16.794 BaseBdev2 00:10:16.794 [2024-12-06 15:37:00.039574] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:16.794 [2024-12-06 15:37:00.039598] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:16.794 [2024-12-06 15:37:00.039767] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:16.794 15:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.794 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:16.794 15:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:16.794 15:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:16.794 15:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:16.794 15:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:16.794 15:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:16.794 15:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:16.794 15:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.794 15:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.794 15:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.794 15:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:16.794 15:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.794 15:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.794 [ 00:10:16.794 { 00:10:16.794 "name": "BaseBdev2", 00:10:16.794 "aliases": [ 00:10:16.794 "7460b4e8-bd33-4b55-87b2-d48a9b26f0f2" 00:10:16.794 ], 00:10:16.794 "product_name": "Malloc disk", 00:10:16.794 "block_size": 512, 00:10:16.794 "num_blocks": 65536, 00:10:16.794 "uuid": "7460b4e8-bd33-4b55-87b2-d48a9b26f0f2", 00:10:16.794 "assigned_rate_limits": { 00:10:16.794 "rw_ios_per_sec": 0, 00:10:16.794 "rw_mbytes_per_sec": 0, 00:10:16.794 "r_mbytes_per_sec": 0, 00:10:16.794 "w_mbytes_per_sec": 0 00:10:16.794 }, 00:10:16.794 "claimed": true, 00:10:16.794 "claim_type": "exclusive_write", 00:10:16.794 "zoned": false, 00:10:16.794 "supported_io_types": { 00:10:16.794 "read": true, 00:10:16.794 "write": true, 00:10:16.794 "unmap": true, 00:10:16.794 "flush": true, 00:10:16.794 "reset": true, 00:10:16.794 "nvme_admin": false, 00:10:16.794 "nvme_io": false, 00:10:16.794 "nvme_io_md": false, 00:10:16.794 "write_zeroes": true, 00:10:16.794 "zcopy": true, 00:10:16.794 "get_zone_info": false, 00:10:16.794 "zone_management": false, 00:10:16.794 "zone_append": false, 00:10:16.794 "compare": false, 00:10:16.794 "compare_and_write": false, 00:10:16.794 "abort": true, 00:10:17.053 "seek_hole": false, 00:10:17.053 "seek_data": false, 00:10:17.053 "copy": true, 00:10:17.053 "nvme_iov_md": false 00:10:17.053 }, 00:10:17.053 "memory_domains": [ 00:10:17.053 { 00:10:17.053 "dma_device_id": "system", 00:10:17.053 "dma_device_type": 1 00:10:17.053 }, 00:10:17.053 { 00:10:17.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.053 "dma_device_type": 2 00:10:17.053 } 00:10:17.053 ], 00:10:17.053 "driver_specific": {} 00:10:17.053 } 00:10:17.053 ] 00:10:17.053 15:37:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.053 15:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:17.053 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:17.053 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:17.053 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:10:17.053 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.053 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.053 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.053 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.053 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:17.053 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.053 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.053 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.053 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.053 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.053 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.053 15:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.053 15:37:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:17.053 15:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.053 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.053 "name": "Existed_Raid", 00:10:17.053 "uuid": "30e93dbd-4a25-489f-9335-82dda6098384", 00:10:17.053 "strip_size_kb": 64, 00:10:17.053 "state": "online", 00:10:17.053 "raid_level": "raid0", 00:10:17.053 "superblock": true, 00:10:17.053 "num_base_bdevs": 2, 00:10:17.053 "num_base_bdevs_discovered": 2, 00:10:17.053 "num_base_bdevs_operational": 2, 00:10:17.053 "base_bdevs_list": [ 00:10:17.053 { 00:10:17.053 "name": "BaseBdev1", 00:10:17.053 "uuid": "e3510b5a-0ccf-4645-a546-e048356b9e6d", 00:10:17.054 "is_configured": true, 00:10:17.054 "data_offset": 2048, 00:10:17.054 "data_size": 63488 00:10:17.054 }, 00:10:17.054 { 00:10:17.054 "name": "BaseBdev2", 00:10:17.054 "uuid": "7460b4e8-bd33-4b55-87b2-d48a9b26f0f2", 00:10:17.054 "is_configured": true, 00:10:17.054 "data_offset": 2048, 00:10:17.054 "data_size": 63488 00:10:17.054 } 00:10:17.054 ] 00:10:17.054 }' 00:10:17.054 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.054 15:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.313 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:17.313 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:17.313 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:17.313 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:17.313 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:17.313 15:37:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:17.313 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:17.313 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:17.313 15:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.313 15:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.313 [2024-12-06 15:37:00.530404] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:17.313 15:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.313 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:17.313 "name": "Existed_Raid", 00:10:17.313 "aliases": [ 00:10:17.313 "30e93dbd-4a25-489f-9335-82dda6098384" 00:10:17.313 ], 00:10:17.313 "product_name": "Raid Volume", 00:10:17.313 "block_size": 512, 00:10:17.313 "num_blocks": 126976, 00:10:17.313 "uuid": "30e93dbd-4a25-489f-9335-82dda6098384", 00:10:17.313 "assigned_rate_limits": { 00:10:17.313 "rw_ios_per_sec": 0, 00:10:17.313 "rw_mbytes_per_sec": 0, 00:10:17.313 "r_mbytes_per_sec": 0, 00:10:17.313 "w_mbytes_per_sec": 0 00:10:17.313 }, 00:10:17.313 "claimed": false, 00:10:17.313 "zoned": false, 00:10:17.313 "supported_io_types": { 00:10:17.313 "read": true, 00:10:17.313 "write": true, 00:10:17.313 "unmap": true, 00:10:17.313 "flush": true, 00:10:17.313 "reset": true, 00:10:17.313 "nvme_admin": false, 00:10:17.313 "nvme_io": false, 00:10:17.313 "nvme_io_md": false, 00:10:17.313 "write_zeroes": true, 00:10:17.313 "zcopy": false, 00:10:17.313 "get_zone_info": false, 00:10:17.313 "zone_management": false, 00:10:17.313 "zone_append": false, 00:10:17.313 "compare": false, 00:10:17.313 "compare_and_write": false, 00:10:17.313 "abort": false, 00:10:17.313 "seek_hole": false, 
00:10:17.313 "seek_data": false, 00:10:17.313 "copy": false, 00:10:17.313 "nvme_iov_md": false 00:10:17.313 }, 00:10:17.313 "memory_domains": [ 00:10:17.313 { 00:10:17.313 "dma_device_id": "system", 00:10:17.313 "dma_device_type": 1 00:10:17.313 }, 00:10:17.313 { 00:10:17.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.313 "dma_device_type": 2 00:10:17.313 }, 00:10:17.313 { 00:10:17.313 "dma_device_id": "system", 00:10:17.313 "dma_device_type": 1 00:10:17.313 }, 00:10:17.313 { 00:10:17.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.313 "dma_device_type": 2 00:10:17.313 } 00:10:17.313 ], 00:10:17.313 "driver_specific": { 00:10:17.313 "raid": { 00:10:17.313 "uuid": "30e93dbd-4a25-489f-9335-82dda6098384", 00:10:17.313 "strip_size_kb": 64, 00:10:17.313 "state": "online", 00:10:17.313 "raid_level": "raid0", 00:10:17.313 "superblock": true, 00:10:17.313 "num_base_bdevs": 2, 00:10:17.313 "num_base_bdevs_discovered": 2, 00:10:17.313 "num_base_bdevs_operational": 2, 00:10:17.313 "base_bdevs_list": [ 00:10:17.313 { 00:10:17.313 "name": "BaseBdev1", 00:10:17.313 "uuid": "e3510b5a-0ccf-4645-a546-e048356b9e6d", 00:10:17.313 "is_configured": true, 00:10:17.313 "data_offset": 2048, 00:10:17.313 "data_size": 63488 00:10:17.313 }, 00:10:17.313 { 00:10:17.313 "name": "BaseBdev2", 00:10:17.313 "uuid": "7460b4e8-bd33-4b55-87b2-d48a9b26f0f2", 00:10:17.313 "is_configured": true, 00:10:17.313 "data_offset": 2048, 00:10:17.313 "data_size": 63488 00:10:17.313 } 00:10:17.313 ] 00:10:17.313 } 00:10:17.313 } 00:10:17.313 }' 00:10:17.313 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:17.573 BaseBdev2' 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.573 15:37:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.573 [2024-12-06 15:37:00.734151] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:17.573 [2024-12-06 15:37:00.734304] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:17.573 [2024-12-06 15:37:00.734396] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.573 15:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.832 15:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.832 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.832 "name": "Existed_Raid", 00:10:17.832 "uuid": "30e93dbd-4a25-489f-9335-82dda6098384", 00:10:17.832 "strip_size_kb": 64, 00:10:17.832 "state": "offline", 00:10:17.832 "raid_level": "raid0", 00:10:17.832 "superblock": true, 00:10:17.832 "num_base_bdevs": 2, 00:10:17.832 "num_base_bdevs_discovered": 1, 00:10:17.832 "num_base_bdevs_operational": 1, 00:10:17.832 "base_bdevs_list": [ 00:10:17.832 { 00:10:17.832 "name": null, 00:10:17.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.832 "is_configured": false, 00:10:17.832 "data_offset": 0, 00:10:17.832 "data_size": 63488 00:10:17.832 }, 00:10:17.832 { 00:10:17.832 "name": "BaseBdev2", 00:10:17.832 "uuid": 
"7460b4e8-bd33-4b55-87b2-d48a9b26f0f2", 00:10:17.832 "is_configured": true, 00:10:17.832 "data_offset": 2048, 00:10:17.832 "data_size": 63488 00:10:17.832 } 00:10:17.832 ] 00:10:17.832 }' 00:10:17.832 15:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.832 15:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.092 15:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:18.092 15:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:18.092 15:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.092 15:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.092 15:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:18.092 15:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.092 15:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.092 15:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:18.092 15:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:18.092 15:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:18.092 15:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.092 15:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.092 [2024-12-06 15:37:01.303525] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:18.092 [2024-12-06 15:37:01.303762] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
Existed_Raid, state offline 00:10:18.350 15:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.350 15:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:18.351 15:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:18.351 15:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.351 15:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:18.351 15:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.351 15:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.351 15:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.351 15:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:18.351 15:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:18.351 15:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:18.351 15:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60978 00:10:18.351 15:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60978 ']' 00:10:18.351 15:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60978 00:10:18.351 15:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:18.351 15:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:18.351 15:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60978 00:10:18.351 killing process with pid 60978 00:10:18.351 15:37:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:18.351 15:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:18.351 15:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60978' 00:10:18.351 15:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60978 00:10:18.351 [2024-12-06 15:37:01.510210] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:18.351 15:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60978 00:10:18.351 [2024-12-06 15:37:01.529313] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:19.807 15:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:19.807 00:10:19.807 real 0m5.144s 00:10:19.807 user 0m7.128s 00:10:19.807 sys 0m1.053s 00:10:19.807 ************************************ 00:10:19.807 END TEST raid_state_function_test_sb 00:10:19.807 ************************************ 00:10:19.807 15:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.807 15:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.807 15:37:02 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:10:19.807 15:37:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:19.807 15:37:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.807 15:37:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:19.807 ************************************ 00:10:19.807 START TEST raid_superblock_test 00:10:19.807 ************************************ 00:10:19.807 15:37:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:10:19.807 
15:37:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:19.807 15:37:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:10:19.807 15:37:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:19.807 15:37:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:19.807 15:37:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:19.807 15:37:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:19.807 15:37:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:19.807 15:37:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:19.807 15:37:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:19.807 15:37:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:19.807 15:37:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:19.807 15:37:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:19.807 15:37:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:19.807 15:37:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:19.807 15:37:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:19.807 15:37:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:19.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:19.807 15:37:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61230 00:10:19.807 15:37:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:19.807 15:37:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61230 00:10:19.807 15:37:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61230 ']' 00:10:19.807 15:37:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.807 15:37:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:19.807 15:37:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.807 15:37:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:19.807 15:37:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.807 [2024-12-06 15:37:02.976696] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:10:19.807 [2024-12-06 15:37:02.976853] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61230 ] 00:10:20.066 [2024-12-06 15:37:03.153614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.066 [2024-12-06 15:37:03.300948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.326 [2024-12-06 15:37:03.548419] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.326 [2024-12-06 15:37:03.548521] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.586 15:37:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:20.586 15:37:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:20.586 15:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:20.586 15:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:20.586 15:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:20.586 15:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:20.586 15:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:20.586 15:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:20.586 15:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:20.586 15:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:20.586 15:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:20.586 
15:37:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.586 15:37:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.586 malloc1 00:10:20.586 15:37:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.586 15:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:20.586 15:37:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.586 15:37:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.845 [2024-12-06 15:37:03.881111] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:20.845 [2024-12-06 15:37:03.881310] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.845 [2024-12-06 15:37:03.881350] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:20.845 [2024-12-06 15:37:03.881363] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.845 [2024-12-06 15:37:03.884134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.845 [2024-12-06 15:37:03.884176] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:20.845 pt1 00:10:20.845 15:37:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.845 15:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:20.845 15:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:20.845 15:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:20.845 15:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:20.845 15:37:03 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:20.845 15:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:20.845 15:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:20.845 15:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:20.845 15:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:20.845 15:37:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.845 15:37:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.845 malloc2 00:10:20.845 15:37:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.845 15:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:20.845 15:37:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.845 15:37:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.845 [2024-12-06 15:37:03.941678] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:20.845 [2024-12-06 15:37:03.941863] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.845 [2024-12-06 15:37:03.942024] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:20.845 [2024-12-06 15:37:03.942124] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.845 [2024-12-06 15:37:03.944958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.845 [2024-12-06 15:37:03.945088] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:20.845 
pt2 00:10:20.845 15:37:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.845 15:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:20.845 15:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:20.845 15:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:10:20.845 15:37:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.845 15:37:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.845 [2024-12-06 15:37:03.953733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:20.845 [2024-12-06 15:37:03.956164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:20.845 [2024-12-06 15:37:03.956344] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:20.845 [2024-12-06 15:37:03.956359] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:20.845 [2024-12-06 15:37:03.956653] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:20.845 [2024-12-06 15:37:03.956820] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:20.845 [2024-12-06 15:37:03.956834] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:20.845 [2024-12-06 15:37:03.956997] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.845 15:37:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.845 15:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:20.845 15:37:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.846 15:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.846 15:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.846 15:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.846 15:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:20.846 15:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.846 15:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.846 15:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.846 15:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.846 15:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.846 15:37:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.846 15:37:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.846 15:37:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.846 15:37:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.846 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.846 "name": "raid_bdev1", 00:10:20.846 "uuid": "c06ea8c4-888e-4a64-b90f-8259e3d1e386", 00:10:20.846 "strip_size_kb": 64, 00:10:20.846 "state": "online", 00:10:20.846 "raid_level": "raid0", 00:10:20.846 "superblock": true, 00:10:20.846 "num_base_bdevs": 2, 00:10:20.846 "num_base_bdevs_discovered": 2, 00:10:20.846 "num_base_bdevs_operational": 2, 00:10:20.846 "base_bdevs_list": [ 00:10:20.846 { 00:10:20.846 "name": "pt1", 
00:10:20.846 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:20.846 "is_configured": true, 00:10:20.846 "data_offset": 2048, 00:10:20.846 "data_size": 63488 00:10:20.846 }, 00:10:20.846 { 00:10:20.846 "name": "pt2", 00:10:20.846 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.846 "is_configured": true, 00:10:20.846 "data_offset": 2048, 00:10:20.846 "data_size": 63488 00:10:20.846 } 00:10:20.846 ] 00:10:20.846 }' 00:10:20.846 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.846 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.105 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:21.105 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:21.105 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:21.105 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:21.105 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:21.105 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:21.105 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:21.105 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:21.105 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.105 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.364 [2024-12-06 15:37:04.397739] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:21.364 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.364 15:37:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:21.364 "name": "raid_bdev1", 00:10:21.364 "aliases": [ 00:10:21.364 "c06ea8c4-888e-4a64-b90f-8259e3d1e386" 00:10:21.364 ], 00:10:21.364 "product_name": "Raid Volume", 00:10:21.364 "block_size": 512, 00:10:21.364 "num_blocks": 126976, 00:10:21.364 "uuid": "c06ea8c4-888e-4a64-b90f-8259e3d1e386", 00:10:21.364 "assigned_rate_limits": { 00:10:21.364 "rw_ios_per_sec": 0, 00:10:21.364 "rw_mbytes_per_sec": 0, 00:10:21.364 "r_mbytes_per_sec": 0, 00:10:21.364 "w_mbytes_per_sec": 0 00:10:21.364 }, 00:10:21.364 "claimed": false, 00:10:21.364 "zoned": false, 00:10:21.364 "supported_io_types": { 00:10:21.364 "read": true, 00:10:21.364 "write": true, 00:10:21.364 "unmap": true, 00:10:21.364 "flush": true, 00:10:21.364 "reset": true, 00:10:21.364 "nvme_admin": false, 00:10:21.364 "nvme_io": false, 00:10:21.364 "nvme_io_md": false, 00:10:21.364 "write_zeroes": true, 00:10:21.364 "zcopy": false, 00:10:21.364 "get_zone_info": false, 00:10:21.364 "zone_management": false, 00:10:21.364 "zone_append": false, 00:10:21.364 "compare": false, 00:10:21.364 "compare_and_write": false, 00:10:21.364 "abort": false, 00:10:21.364 "seek_hole": false, 00:10:21.364 "seek_data": false, 00:10:21.364 "copy": false, 00:10:21.364 "nvme_iov_md": false 00:10:21.364 }, 00:10:21.364 "memory_domains": [ 00:10:21.364 { 00:10:21.364 "dma_device_id": "system", 00:10:21.364 "dma_device_type": 1 00:10:21.364 }, 00:10:21.364 { 00:10:21.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.364 "dma_device_type": 2 00:10:21.364 }, 00:10:21.364 { 00:10:21.364 "dma_device_id": "system", 00:10:21.364 "dma_device_type": 1 00:10:21.364 }, 00:10:21.364 { 00:10:21.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.364 "dma_device_type": 2 00:10:21.364 } 00:10:21.364 ], 00:10:21.364 "driver_specific": { 00:10:21.364 "raid": { 00:10:21.364 "uuid": "c06ea8c4-888e-4a64-b90f-8259e3d1e386", 00:10:21.364 "strip_size_kb": 64, 00:10:21.364 "state": "online", 00:10:21.364 
"raid_level": "raid0", 00:10:21.364 "superblock": true, 00:10:21.364 "num_base_bdevs": 2, 00:10:21.364 "num_base_bdevs_discovered": 2, 00:10:21.364 "num_base_bdevs_operational": 2, 00:10:21.364 "base_bdevs_list": [ 00:10:21.364 { 00:10:21.364 "name": "pt1", 00:10:21.364 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:21.364 "is_configured": true, 00:10:21.364 "data_offset": 2048, 00:10:21.364 "data_size": 63488 00:10:21.364 }, 00:10:21.364 { 00:10:21.364 "name": "pt2", 00:10:21.364 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:21.364 "is_configured": true, 00:10:21.364 "data_offset": 2048, 00:10:21.364 "data_size": 63488 00:10:21.364 } 00:10:21.364 ] 00:10:21.364 } 00:10:21.364 } 00:10:21.364 }' 00:10:21.364 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:21.364 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:21.364 pt2' 00:10:21.364 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.364 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:21.364 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.364 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:21.364 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.364 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.364 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.364 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.364 15:37:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.364 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.364 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.364 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:21.364 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.364 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.364 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.364 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.364 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.364 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.364 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:21.364 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.364 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.364 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:21.364 [2024-12-06 15:37:04.621381] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:21.364 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.364 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c06ea8c4-888e-4a64-b90f-8259e3d1e386 00:10:21.364 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
c06ea8c4-888e-4a64-b90f-8259e3d1e386 ']' 00:10:21.364 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:21.365 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.365 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.365 [2024-12-06 15:37:04.653005] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:21.365 [2024-12-06 15:37:04.653139] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:21.365 [2024-12-06 15:37:04.653270] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:21.365 [2024-12-06 15:37:04.653331] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:21.365 [2024-12-06 15:37:04.653348] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:21.624 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.624 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.624 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.624 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:21.624 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.624 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.624 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:21.624 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:21.624 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:21.624 15:37:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:21.624 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.624 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.624 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.624 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:21.624 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:21.624 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.624 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.624 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.624 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:21.624 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create 
-z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.625 [2024-12-06 15:37:04.780888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:21.625 [2024-12-06 15:37:04.783395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:21.625 [2024-12-06 15:37:04.783478] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:21.625 [2024-12-06 15:37:04.783564] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:21.625 [2024-12-06 15:37:04.783588] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:21.625 [2024-12-06 15:37:04.783614] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:21.625 request: 00:10:21.625 { 00:10:21.625 "name": "raid_bdev1", 00:10:21.625 "raid_level": "raid0", 00:10:21.625 "base_bdevs": [ 00:10:21.625 "malloc1", 00:10:21.625 "malloc2" 00:10:21.625 ], 00:10:21.625 "strip_size_kb": 64, 00:10:21.625 
"superblock": false, 00:10:21.625 "method": "bdev_raid_create", 00:10:21.625 "req_id": 1 00:10:21.625 } 00:10:21.625 Got JSON-RPC error response 00:10:21.625 response: 00:10:21.625 { 00:10:21.625 "code": -17, 00:10:21.625 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:21.625 } 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.625 [2024-12-06 15:37:04.848810] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: 
Match on malloc1 00:10:21.625 [2024-12-06 15:37:04.848915] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.625 [2024-12-06 15:37:04.848952] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:21.625 [2024-12-06 15:37:04.848976] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.625 [2024-12-06 15:37:04.852758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.625 [2024-12-06 15:37:04.853005] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:21.625 [2024-12-06 15:37:04.853176] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:21.625 [2024-12-06 15:37:04.853287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:21.625 pt1 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.625 "name": "raid_bdev1", 00:10:21.625 "uuid": "c06ea8c4-888e-4a64-b90f-8259e3d1e386", 00:10:21.625 "strip_size_kb": 64, 00:10:21.625 "state": "configuring", 00:10:21.625 "raid_level": "raid0", 00:10:21.625 "superblock": true, 00:10:21.625 "num_base_bdevs": 2, 00:10:21.625 "num_base_bdevs_discovered": 1, 00:10:21.625 "num_base_bdevs_operational": 2, 00:10:21.625 "base_bdevs_list": [ 00:10:21.625 { 00:10:21.625 "name": "pt1", 00:10:21.625 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:21.625 "is_configured": true, 00:10:21.625 "data_offset": 2048, 00:10:21.625 "data_size": 63488 00:10:21.625 }, 00:10:21.625 { 00:10:21.625 "name": null, 00:10:21.625 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:21.625 "is_configured": false, 00:10:21.625 "data_offset": 2048, 00:10:21.625 "data_size": 63488 00:10:21.625 } 00:10:21.625 ] 00:10:21.625 }' 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.625 15:37:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.191 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:10:22.191 15:37:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:22.191 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:22.191 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:22.191 15:37:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.191 15:37:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.192 [2024-12-06 15:37:05.292709] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:22.192 [2024-12-06 15:37:05.292812] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.192 [2024-12-06 15:37:05.292842] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:22.192 [2024-12-06 15:37:05.292858] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.192 [2024-12-06 15:37:05.293454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.192 [2024-12-06 15:37:05.293489] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:22.192 [2024-12-06 15:37:05.293614] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:22.192 [2024-12-06 15:37:05.293654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:22.192 [2024-12-06 15:37:05.293793] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:22.192 [2024-12-06 15:37:05.293807] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:22.192 [2024-12-06 15:37:05.294106] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:22.192 [2024-12-06 15:37:05.294250] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:10:22.192 [2024-12-06 15:37:05.294260] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:22.192 [2024-12-06 15:37:05.294406] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.192 pt2 00:10:22.192 15:37:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.192 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:22.192 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:22.192 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:22.192 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:22.192 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:22.192 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:22.192 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.192 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:22.192 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.192 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.192 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.192 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.192 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.192 15:37:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.192 15:37:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:22.192 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.192 15:37:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.192 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.192 "name": "raid_bdev1", 00:10:22.192 "uuid": "c06ea8c4-888e-4a64-b90f-8259e3d1e386", 00:10:22.192 "strip_size_kb": 64, 00:10:22.192 "state": "online", 00:10:22.192 "raid_level": "raid0", 00:10:22.192 "superblock": true, 00:10:22.192 "num_base_bdevs": 2, 00:10:22.192 "num_base_bdevs_discovered": 2, 00:10:22.192 "num_base_bdevs_operational": 2, 00:10:22.192 "base_bdevs_list": [ 00:10:22.192 { 00:10:22.192 "name": "pt1", 00:10:22.192 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:22.192 "is_configured": true, 00:10:22.192 "data_offset": 2048, 00:10:22.192 "data_size": 63488 00:10:22.192 }, 00:10:22.192 { 00:10:22.192 "name": "pt2", 00:10:22.192 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:22.192 "is_configured": true, 00:10:22.192 "data_offset": 2048, 00:10:22.192 "data_size": 63488 00:10:22.192 } 00:10:22.192 ] 00:10:22.192 }' 00:10:22.192 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.192 15:37:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.450 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:22.450 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:22.450 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:22.450 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:22.450 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:22.450 15:37:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:22.450 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:22.450 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:22.450 15:37:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.450 15:37:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.450 [2024-12-06 15:37:05.688748] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:22.450 15:37:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.450 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:22.450 "name": "raid_bdev1", 00:10:22.450 "aliases": [ 00:10:22.450 "c06ea8c4-888e-4a64-b90f-8259e3d1e386" 00:10:22.450 ], 00:10:22.450 "product_name": "Raid Volume", 00:10:22.450 "block_size": 512, 00:10:22.450 "num_blocks": 126976, 00:10:22.450 "uuid": "c06ea8c4-888e-4a64-b90f-8259e3d1e386", 00:10:22.450 "assigned_rate_limits": { 00:10:22.450 "rw_ios_per_sec": 0, 00:10:22.450 "rw_mbytes_per_sec": 0, 00:10:22.450 "r_mbytes_per_sec": 0, 00:10:22.450 "w_mbytes_per_sec": 0 00:10:22.450 }, 00:10:22.450 "claimed": false, 00:10:22.450 "zoned": false, 00:10:22.450 "supported_io_types": { 00:10:22.450 "read": true, 00:10:22.450 "write": true, 00:10:22.450 "unmap": true, 00:10:22.450 "flush": true, 00:10:22.450 "reset": true, 00:10:22.450 "nvme_admin": false, 00:10:22.450 "nvme_io": false, 00:10:22.450 "nvme_io_md": false, 00:10:22.450 "write_zeroes": true, 00:10:22.450 "zcopy": false, 00:10:22.450 "get_zone_info": false, 00:10:22.450 "zone_management": false, 00:10:22.450 "zone_append": false, 00:10:22.450 "compare": false, 00:10:22.450 "compare_and_write": false, 00:10:22.450 "abort": false, 00:10:22.450 "seek_hole": false, 00:10:22.450 
"seek_data": false, 00:10:22.450 "copy": false, 00:10:22.450 "nvme_iov_md": false 00:10:22.450 }, 00:10:22.450 "memory_domains": [ 00:10:22.450 { 00:10:22.450 "dma_device_id": "system", 00:10:22.450 "dma_device_type": 1 00:10:22.450 }, 00:10:22.450 { 00:10:22.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.450 "dma_device_type": 2 00:10:22.450 }, 00:10:22.450 { 00:10:22.450 "dma_device_id": "system", 00:10:22.450 "dma_device_type": 1 00:10:22.450 }, 00:10:22.450 { 00:10:22.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.450 "dma_device_type": 2 00:10:22.450 } 00:10:22.450 ], 00:10:22.450 "driver_specific": { 00:10:22.450 "raid": { 00:10:22.450 "uuid": "c06ea8c4-888e-4a64-b90f-8259e3d1e386", 00:10:22.450 "strip_size_kb": 64, 00:10:22.450 "state": "online", 00:10:22.450 "raid_level": "raid0", 00:10:22.450 "superblock": true, 00:10:22.450 "num_base_bdevs": 2, 00:10:22.450 "num_base_bdevs_discovered": 2, 00:10:22.450 "num_base_bdevs_operational": 2, 00:10:22.450 "base_bdevs_list": [ 00:10:22.450 { 00:10:22.450 "name": "pt1", 00:10:22.450 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:22.450 "is_configured": true, 00:10:22.450 "data_offset": 2048, 00:10:22.450 "data_size": 63488 00:10:22.450 }, 00:10:22.450 { 00:10:22.450 "name": "pt2", 00:10:22.450 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:22.450 "is_configured": true, 00:10:22.450 "data_offset": 2048, 00:10:22.450 "data_size": 63488 00:10:22.450 } 00:10:22.450 ] 00:10:22.450 } 00:10:22.450 } 00:10:22.450 }' 00:10:22.450 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:22.708 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:22.708 pt2' 00:10:22.708 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.708 15:37:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:22.708 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.708 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:22.708 15:37:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.708 15:37:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.708 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.708 15:37:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.708 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.708 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.708 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.708 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:22.708 15:37:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.708 15:37:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.709 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.709 15:37:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.709 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.709 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.709 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 
00:10:22.709 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:22.709 15:37:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.709 15:37:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.709 [2024-12-06 15:37:05.920411] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:22.709 15:37:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.709 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c06ea8c4-888e-4a64-b90f-8259e3d1e386 '!=' c06ea8c4-888e-4a64-b90f-8259e3d1e386 ']' 00:10:22.709 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:22.709 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:22.709 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:22.709 15:37:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61230 00:10:22.709 15:37:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61230 ']' 00:10:22.709 15:37:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61230 00:10:22.709 15:37:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:22.709 15:37:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:22.709 15:37:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61230 00:10:22.967 15:37:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:22.967 killing process with pid 61230 00:10:22.967 15:37:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:22.967 15:37:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 61230' 00:10:22.967 15:37:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61230 00:10:22.967 [2024-12-06 15:37:06.024388] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:22.967 [2024-12-06 15:37:06.024525] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:22.967 15:37:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61230 00:10:22.967 [2024-12-06 15:37:06.024590] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:22.967 [2024-12-06 15:37:06.024606] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:22.967 [2024-12-06 15:37:06.258064] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:24.341 15:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:24.341 00:10:24.341 real 0m4.622s 00:10:24.341 user 0m6.264s 00:10:24.341 sys 0m0.924s 00:10:24.341 15:37:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.341 15:37:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.341 ************************************ 00:10:24.341 END TEST raid_superblock_test 00:10:24.341 ************************************ 00:10:24.341 15:37:07 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:10:24.341 15:37:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:24.342 15:37:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.342 15:37:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:24.342 ************************************ 00:10:24.342 START TEST raid_read_error_test 00:10:24.342 ************************************ 00:10:24.342 15:37:07 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:10:24.342 15:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:24.342 15:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:24.342 15:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:24.342 15:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:24.342 15:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:24.342 15:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:24.342 15:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:24.342 15:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:24.342 15:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:24.342 15:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:24.342 15:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:24.342 15:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:24.342 15:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:24.342 15:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:24.342 15:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:24.342 15:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:24.342 15:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:24.342 15:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:24.342 15:37:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:24.342 15:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:24.342 15:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:24.342 15:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:24.342 15:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.8l16mt3121 00:10:24.342 15:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61436 00:10:24.342 15:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61436 00:10:24.342 15:37:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:24.342 15:37:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61436 ']' 00:10:24.342 15:37:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.342 15:37:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:24.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.342 15:37:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.342 15:37:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:24.342 15:37:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.599 [2024-12-06 15:37:07.700133] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:10:24.599 [2024-12-06 15:37:07.700289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61436 ] 00:10:24.599 [2024-12-06 15:37:07.887370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.857 [2024-12-06 15:37:08.035899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.115 [2024-12-06 15:37:08.274880] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.115 [2024-12-06 15:37:08.274948] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.372 15:37:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:25.372 15:37:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:25.372 15:37:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:25.372 15:37:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:25.372 15:37:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.372 15:37:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.372 BaseBdev1_malloc 00:10:25.372 15:37:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.372 15:37:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:25.372 15:37:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.372 15:37:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.372 true 00:10:25.372 15:37:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:25.372 15:37:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:25.372 15:37:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.372 15:37:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.372 [2024-12-06 15:37:08.610014] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:25.372 [2024-12-06 15:37:08.610086] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.372 [2024-12-06 15:37:08.610111] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:25.372 [2024-12-06 15:37:08.610127] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.373 [2024-12-06 15:37:08.612818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.373 [2024-12-06 15:37:08.612862] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:25.373 BaseBdev1 00:10:25.373 15:37:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.373 15:37:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:25.373 15:37:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:25.373 15:37:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.373 15:37:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.373 BaseBdev2_malloc 00:10:25.373 15:37:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.630 15:37:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:25.630 15:37:08 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.630 15:37:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.630 true 00:10:25.630 15:37:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.630 15:37:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:25.630 15:37:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.630 15:37:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.630 [2024-12-06 15:37:08.683871] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:25.630 [2024-12-06 15:37:08.683939] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.630 [2024-12-06 15:37:08.683962] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:25.630 [2024-12-06 15:37:08.683976] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.630 [2024-12-06 15:37:08.686660] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.630 [2024-12-06 15:37:08.686706] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:25.630 BaseBdev2 00:10:25.630 15:37:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.630 15:37:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:25.630 15:37:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.630 15:37:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.630 [2024-12-06 15:37:08.695941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:10:25.630 [2024-12-06 15:37:08.698347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:25.630 [2024-12-06 15:37:08.698573] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:25.630 [2024-12-06 15:37:08.698594] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:25.630 [2024-12-06 15:37:08.698857] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:25.630 [2024-12-06 15:37:08.699045] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:25.630 [2024-12-06 15:37:08.699070] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:25.630 [2024-12-06 15:37:08.699227] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:25.630 15:37:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.630 15:37:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:25.630 15:37:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:25.630 15:37:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:25.630 15:37:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:25.630 15:37:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.630 15:37:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:25.630 15:37:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.630 15:37:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.630 15:37:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:25.630 15:37:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.630 15:37:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.630 15:37:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.630 15:37:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.630 15:37:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.630 15:37:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.630 15:37:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.630 "name": "raid_bdev1", 00:10:25.630 "uuid": "6be16d80-99ae-4f5c-8f6c-0a4ab1023d4e", 00:10:25.630 "strip_size_kb": 64, 00:10:25.630 "state": "online", 00:10:25.630 "raid_level": "raid0", 00:10:25.630 "superblock": true, 00:10:25.630 "num_base_bdevs": 2, 00:10:25.630 "num_base_bdevs_discovered": 2, 00:10:25.630 "num_base_bdevs_operational": 2, 00:10:25.630 "base_bdevs_list": [ 00:10:25.630 { 00:10:25.630 "name": "BaseBdev1", 00:10:25.630 "uuid": "48c0b9e8-d604-5dcb-910b-5682fde42900", 00:10:25.630 "is_configured": true, 00:10:25.630 "data_offset": 2048, 00:10:25.630 "data_size": 63488 00:10:25.630 }, 00:10:25.630 { 00:10:25.630 "name": "BaseBdev2", 00:10:25.630 "uuid": "6a2762a4-63eb-5728-8018-8f9f8798497d", 00:10:25.630 "is_configured": true, 00:10:25.630 "data_offset": 2048, 00:10:25.630 "data_size": 63488 00:10:25.630 } 00:10:25.630 ] 00:10:25.630 }' 00:10:25.630 15:37:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.630 15:37:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.888 15:37:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:25.888 15:37:09 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:26.145 [2024-12-06 15:37:09.217067] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:27.076 15:37:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:27.076 15:37:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.076 15:37:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.076 15:37:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.077 15:37:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:27.077 15:37:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:27.077 15:37:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:10:27.077 15:37:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:27.077 15:37:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:27.077 15:37:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:27.077 15:37:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:27.077 15:37:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.077 15:37:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:27.077 15:37:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.077 15:37:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.077 15:37:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:27.077 15:37:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.077 15:37:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.077 15:37:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:27.077 15:37:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.077 15:37:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.077 15:37:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.077 15:37:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.077 "name": "raid_bdev1", 00:10:27.077 "uuid": "6be16d80-99ae-4f5c-8f6c-0a4ab1023d4e", 00:10:27.077 "strip_size_kb": 64, 00:10:27.077 "state": "online", 00:10:27.077 "raid_level": "raid0", 00:10:27.077 "superblock": true, 00:10:27.077 "num_base_bdevs": 2, 00:10:27.077 "num_base_bdevs_discovered": 2, 00:10:27.077 "num_base_bdevs_operational": 2, 00:10:27.077 "base_bdevs_list": [ 00:10:27.077 { 00:10:27.077 "name": "BaseBdev1", 00:10:27.077 "uuid": "48c0b9e8-d604-5dcb-910b-5682fde42900", 00:10:27.077 "is_configured": true, 00:10:27.077 "data_offset": 2048, 00:10:27.077 "data_size": 63488 00:10:27.077 }, 00:10:27.077 { 00:10:27.077 "name": "BaseBdev2", 00:10:27.077 "uuid": "6a2762a4-63eb-5728-8018-8f9f8798497d", 00:10:27.077 "is_configured": true, 00:10:27.077 "data_offset": 2048, 00:10:27.077 "data_size": 63488 00:10:27.077 } 00:10:27.077 ] 00:10:27.077 }' 00:10:27.077 15:37:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.077 15:37:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.334 15:37:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:27.334 15:37:10 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.334 15:37:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.334 [2024-12-06 15:37:10.562690] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:27.334 [2024-12-06 15:37:10.562757] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:27.334 [2024-12-06 15:37:10.565965] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:27.334 [2024-12-06 15:37:10.566030] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:27.334 [2024-12-06 15:37:10.566070] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:27.334 [2024-12-06 15:37:10.566086] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:27.334 15:37:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.334 15:37:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61436 00:10:27.334 { 00:10:27.334 "results": [ 00:10:27.334 { 00:10:27.334 "job": "raid_bdev1", 00:10:27.334 "core_mask": "0x1", 00:10:27.334 "workload": "randrw", 00:10:27.334 "percentage": 50, 00:10:27.334 "status": "finished", 00:10:27.334 "queue_depth": 1, 00:10:27.334 "io_size": 131072, 00:10:27.334 "runtime": 1.345585, 00:10:27.334 "iops": 13423.901128505446, 00:10:27.334 "mibps": 1677.9876410631807, 00:10:27.334 "io_failed": 1, 00:10:27.334 "io_timeout": 0, 00:10:27.334 "avg_latency_us": 104.1823897894501, 00:10:27.334 "min_latency_us": 27.553413654618474, 00:10:27.335 "max_latency_us": 1441.0024096385541 00:10:27.335 } 00:10:27.335 ], 00:10:27.335 "core_count": 1 00:10:27.335 } 00:10:27.335 15:37:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61436 ']' 00:10:27.335 15:37:10 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61436 00:10:27.335 15:37:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:27.335 15:37:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:27.335 15:37:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61436 00:10:27.335 15:37:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:27.335 15:37:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:27.335 15:37:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61436' 00:10:27.335 killing process with pid 61436 00:10:27.335 15:37:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61436 00:10:27.335 [2024-12-06 15:37:10.615318] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:27.335 15:37:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61436 00:10:27.592 [2024-12-06 15:37:10.768458] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:28.965 15:37:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.8l16mt3121 00:10:28.965 15:37:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:28.965 15:37:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:28.965 15:37:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:10:28.965 15:37:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:28.965 15:37:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:28.965 15:37:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:28.965 15:37:12 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:10:28.965 00:10:28.965 real 0m4.534s 00:10:28.965 user 0m5.230s 00:10:28.965 sys 0m0.711s 00:10:28.965 15:37:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.965 15:37:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.965 ************************************ 00:10:28.965 END TEST raid_read_error_test 00:10:28.965 ************************************ 00:10:28.965 15:37:12 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:10:28.965 15:37:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:28.965 15:37:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:28.965 15:37:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:28.965 ************************************ 00:10:28.965 START TEST raid_write_error_test 00:10:28.965 ************************************ 00:10:28.965 15:37:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:10:28.965 15:37:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:28.965 15:37:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:28.965 15:37:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:28.965 15:37:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:28.965 15:37:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:28.965 15:37:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:28.965 15:37:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:28.965 15:37:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:28.965 15:37:12 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:28.965 15:37:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:28.965 15:37:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:28.965 15:37:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:28.966 15:37:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:28.966 15:37:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:28.966 15:37:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:28.966 15:37:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:28.966 15:37:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:28.966 15:37:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:28.966 15:37:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:28.966 15:37:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:28.966 15:37:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:28.966 15:37:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:28.966 15:37:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.NwkOBw4s7z 00:10:28.966 15:37:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61582 00:10:28.966 15:37:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:28.966 15:37:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61582 00:10:28.966 15:37:12 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61582 ']' 00:10:28.966 15:37:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.966 15:37:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:28.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.966 15:37:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.966 15:37:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:28.966 15:37:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.250 [2024-12-06 15:37:12.313605] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:10:29.250 [2024-12-06 15:37:12.314128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61582 ] 00:10:29.250 [2024-12-06 15:37:12.501650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.532 [2024-12-06 15:37:12.650068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.792 [2024-12-06 15:37:12.904799] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.792 [2024-12-06 15:37:12.904889] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:30.051 15:37:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:30.051 15:37:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:30.051 15:37:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:10:30.051 15:37:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:30.051 15:37:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.051 15:37:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.051 BaseBdev1_malloc 00:10:30.051 15:37:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.051 15:37:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:30.051 15:37:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.051 15:37:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.051 true 00:10:30.051 15:37:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.051 15:37:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:30.051 15:37:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.051 15:37:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.051 [2024-12-06 15:37:13.227464] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:30.051 [2024-12-06 15:37:13.227551] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.051 [2024-12-06 15:37:13.227577] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:30.051 [2024-12-06 15:37:13.227592] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.051 [2024-12-06 15:37:13.230346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.051 [2024-12-06 15:37:13.230527] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:30.051 BaseBdev1 00:10:30.051 15:37:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.051 15:37:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:30.051 15:37:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:30.051 15:37:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.051 15:37:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.051 BaseBdev2_malloc 00:10:30.051 15:37:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.051 15:37:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:30.051 15:37:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.051 15:37:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.051 true 00:10:30.051 15:37:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.051 15:37:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:30.051 15:37:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.051 15:37:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.051 [2024-12-06 15:37:13.305093] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:30.051 [2024-12-06 15:37:13.305154] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.051 [2024-12-06 15:37:13.305175] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:30.051 
[2024-12-06 15:37:13.305189] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.051 [2024-12-06 15:37:13.307918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.051 [2024-12-06 15:37:13.307962] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:30.051 BaseBdev2 00:10:30.051 15:37:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.051 15:37:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:30.051 15:37:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.051 15:37:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.051 [2024-12-06 15:37:13.317154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:30.051 [2024-12-06 15:37:13.319778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:30.051 [2024-12-06 15:37:13.319984] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:30.051 [2024-12-06 15:37:13.320004] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:30.051 [2024-12-06 15:37:13.320274] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:30.051 [2024-12-06 15:37:13.320470] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:30.051 [2024-12-06 15:37:13.320485] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:30.051 [2024-12-06 15:37:13.320803] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:30.051 15:37:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.051 
15:37:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:30.051 15:37:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:30.051 15:37:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:30.051 15:37:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:30.052 15:37:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.052 15:37:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:30.052 15:37:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.052 15:37:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.052 15:37:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.052 15:37:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.052 15:37:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.052 15:37:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:30.052 15:37:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.052 15:37:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.310 15:37:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.310 15:37:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.310 "name": "raid_bdev1", 00:10:30.310 "uuid": "4ab3621b-cf92-4d34-ad04-cc99286dbda9", 00:10:30.310 "strip_size_kb": 64, 00:10:30.310 "state": "online", 00:10:30.310 "raid_level": "raid0", 00:10:30.310 "superblock": true, 
00:10:30.310 "num_base_bdevs": 2, 00:10:30.310 "num_base_bdevs_discovered": 2, 00:10:30.310 "num_base_bdevs_operational": 2, 00:10:30.310 "base_bdevs_list": [ 00:10:30.310 { 00:10:30.310 "name": "BaseBdev1", 00:10:30.310 "uuid": "2fc94e74-db74-5885-9e94-c1ddb54b94c1", 00:10:30.310 "is_configured": true, 00:10:30.310 "data_offset": 2048, 00:10:30.310 "data_size": 63488 00:10:30.310 }, 00:10:30.310 { 00:10:30.310 "name": "BaseBdev2", 00:10:30.310 "uuid": "f4382d33-e17c-5e2c-be40-a5e45ee0d1e4", 00:10:30.310 "is_configured": true, 00:10:30.310 "data_offset": 2048, 00:10:30.310 "data_size": 63488 00:10:30.310 } 00:10:30.310 ] 00:10:30.311 }' 00:10:30.311 15:37:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.311 15:37:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.569 15:37:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:30.569 15:37:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:30.569 [2024-12-06 15:37:13.846260] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:31.504 15:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:31.504 15:37:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.504 15:37:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.504 15:37:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.504 15:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:31.504 15:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:31.505 15:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:10:31.505 15:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:31.505 15:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:31.505 15:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:31.505 15:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:31.505 15:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.505 15:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:31.505 15:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.505 15:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.505 15:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.505 15:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.505 15:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.505 15:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:31.505 15:37:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.505 15:37:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.763 15:37:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.763 15:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.763 "name": "raid_bdev1", 00:10:31.763 "uuid": "4ab3621b-cf92-4d34-ad04-cc99286dbda9", 00:10:31.763 "strip_size_kb": 64, 00:10:31.763 "state": "online", 00:10:31.763 "raid_level": "raid0", 
00:10:31.763 "superblock": true, 00:10:31.763 "num_base_bdevs": 2, 00:10:31.763 "num_base_bdevs_discovered": 2, 00:10:31.763 "num_base_bdevs_operational": 2, 00:10:31.763 "base_bdevs_list": [ 00:10:31.763 { 00:10:31.763 "name": "BaseBdev1", 00:10:31.763 "uuid": "2fc94e74-db74-5885-9e94-c1ddb54b94c1", 00:10:31.763 "is_configured": true, 00:10:31.763 "data_offset": 2048, 00:10:31.763 "data_size": 63488 00:10:31.763 }, 00:10:31.763 { 00:10:31.763 "name": "BaseBdev2", 00:10:31.763 "uuid": "f4382d33-e17c-5e2c-be40-a5e45ee0d1e4", 00:10:31.763 "is_configured": true, 00:10:31.763 "data_offset": 2048, 00:10:31.763 "data_size": 63488 00:10:31.763 } 00:10:31.763 ] 00:10:31.763 }' 00:10:31.764 15:37:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.764 15:37:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.021 15:37:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:32.021 15:37:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.021 15:37:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.021 [2024-12-06 15:37:15.187379] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:32.021 [2024-12-06 15:37:15.187598] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:32.021 [2024-12-06 15:37:15.190584] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:32.021 [2024-12-06 15:37:15.190762] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:32.021 [2024-12-06 15:37:15.190841] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:32.021 [2024-12-06 15:37:15.190950] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:32.021 { 00:10:32.021 "results": 
[ 00:10:32.021 { 00:10:32.021 "job": "raid_bdev1", 00:10:32.021 "core_mask": "0x1", 00:10:32.021 "workload": "randrw", 00:10:32.021 "percentage": 50, 00:10:32.021 "status": "finished", 00:10:32.021 "queue_depth": 1, 00:10:32.021 "io_size": 131072, 00:10:32.021 "runtime": 1.341264, 00:10:32.021 "iops": 13881.6817569099, 00:10:32.021 "mibps": 1735.2102196137375, 00:10:32.021 "io_failed": 1, 00:10:32.021 "io_timeout": 0, 00:10:32.021 "avg_latency_us": 100.8979564228989, 00:10:32.021 "min_latency_us": 27.964658634538154, 00:10:32.021 "max_latency_us": 1427.8425702811246 00:10:32.021 } 00:10:32.021 ], 00:10:32.021 "core_count": 1 00:10:32.021 } 00:10:32.021 15:37:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.021 15:37:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61582 00:10:32.021 15:37:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 61582 ']' 00:10:32.021 15:37:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61582 00:10:32.021 15:37:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:32.021 15:37:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:32.021 15:37:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61582 00:10:32.021 15:37:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:32.021 15:37:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:32.021 15:37:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61582' 00:10:32.021 killing process with pid 61582 00:10:32.021 15:37:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61582 00:10:32.021 [2024-12-06 15:37:15.227316] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:32.021 15:37:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61582 00:10:32.279 [2024-12-06 15:37:15.380890] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:33.654 15:37:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.NwkOBw4s7z 00:10:33.654 15:37:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:33.654 15:37:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:33.654 15:37:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:10:33.654 15:37:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:33.654 ************************************ 00:10:33.654 END TEST raid_write_error_test 00:10:33.654 ************************************ 00:10:33.654 15:37:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:33.654 15:37:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:33.654 15:37:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:10:33.654 00:10:33.654 real 0m4.518s 00:10:33.654 user 0m5.237s 00:10:33.654 sys 0m0.685s 00:10:33.654 15:37:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:33.654 15:37:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.655 15:37:16 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:33.655 15:37:16 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:10:33.655 15:37:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:33.655 15:37:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:33.655 15:37:16 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:10:33.655 ************************************ 00:10:33.655 START TEST raid_state_function_test 00:10:33.655 ************************************ 00:10:33.655 15:37:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:10:33.655 15:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:33.655 15:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:33.655 15:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:33.655 15:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:33.655 15:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:33.655 15:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:33.655 15:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:33.655 15:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:33.655 15:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:33.655 15:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:33.655 15:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:33.655 15:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:33.655 15:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:33.655 15:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:33.655 15:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:33.655 15:37:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:33.655 15:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:33.655 15:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:33.655 15:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:33.655 15:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:33.655 15:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:33.655 15:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:33.655 15:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:33.655 15:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:33.655 15:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61725 00:10:33.655 15:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61725' 00:10:33.655 Process raid pid: 61725 00:10:33.655 15:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61725 00:10:33.655 15:37:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61725 ']' 00:10:33.655 15:37:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.655 15:37:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:33.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:33.655 15:37:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.655 15:37:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:33.655 15:37:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.655 [2024-12-06 15:37:16.886738] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:10:33.655 [2024-12-06 15:37:16.887171] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:33.914 [2024-12-06 15:37:17.070025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.174 [2024-12-06 15:37:17.224143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.174 [2024-12-06 15:37:17.466400] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:34.174 [2024-12-06 15:37:17.466470] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:34.744 15:37:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:34.744 15:37:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:34.744 15:37:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:34.744 15:37:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.744 15:37:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.744 [2024-12-06 15:37:17.737792] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:34.744 
[2024-12-06 15:37:17.737874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:34.744 [2024-12-06 15:37:17.737887] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:34.744 [2024-12-06 15:37:17.737920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:34.744 15:37:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.744 15:37:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:34.744 15:37:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.744 15:37:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.744 15:37:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:34.744 15:37:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.744 15:37:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:34.744 15:37:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.744 15:37:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.744 15:37:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.744 15:37:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.744 15:37:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.744 15:37:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.744 15:37:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:34.744 15:37:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.744 15:37:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.744 15:37:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.744 "name": "Existed_Raid", 00:10:34.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.744 "strip_size_kb": 64, 00:10:34.744 "state": "configuring", 00:10:34.744 "raid_level": "concat", 00:10:34.744 "superblock": false, 00:10:34.744 "num_base_bdevs": 2, 00:10:34.744 "num_base_bdevs_discovered": 0, 00:10:34.744 "num_base_bdevs_operational": 2, 00:10:34.744 "base_bdevs_list": [ 00:10:34.744 { 00:10:34.744 "name": "BaseBdev1", 00:10:34.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.744 "is_configured": false, 00:10:34.744 "data_offset": 0, 00:10:34.744 "data_size": 0 00:10:34.744 }, 00:10:34.744 { 00:10:34.744 "name": "BaseBdev2", 00:10:34.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.744 "is_configured": false, 00:10:34.744 "data_offset": 0, 00:10:34.744 "data_size": 0 00:10:34.744 } 00:10:34.744 ] 00:10:34.744 }' 00:10:34.744 15:37:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.744 15:37:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.003 15:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:35.003 15:37:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.003 15:37:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.003 [2024-12-06 15:37:18.177136] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:35.003 [2024-12-06 15:37:18.177188] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, 
state configuring 00:10:35.003 15:37:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.003 15:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:35.003 15:37:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.003 15:37:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.003 [2024-12-06 15:37:18.185118] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:35.003 [2024-12-06 15:37:18.185188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:35.003 [2024-12-06 15:37:18.185200] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:35.003 [2024-12-06 15:37:18.185218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:35.003 15:37:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.003 15:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:35.003 15:37:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.003 15:37:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.004 [2024-12-06 15:37:18.238478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:35.004 BaseBdev1 00:10:35.004 15:37:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.004 15:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:35.004 15:37:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:35.004 15:37:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:35.004 15:37:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:35.004 15:37:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:35.004 15:37:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:35.004 15:37:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:35.004 15:37:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.004 15:37:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.004 15:37:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.004 15:37:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:35.004 15:37:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.004 15:37:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.004 [ 00:10:35.004 { 00:10:35.004 "name": "BaseBdev1", 00:10:35.004 "aliases": [ 00:10:35.004 "ddea562b-d405-44ec-bc48-92524313e3b6" 00:10:35.004 ], 00:10:35.004 "product_name": "Malloc disk", 00:10:35.004 "block_size": 512, 00:10:35.004 "num_blocks": 65536, 00:10:35.004 "uuid": "ddea562b-d405-44ec-bc48-92524313e3b6", 00:10:35.004 "assigned_rate_limits": { 00:10:35.004 "rw_ios_per_sec": 0, 00:10:35.004 "rw_mbytes_per_sec": 0, 00:10:35.004 "r_mbytes_per_sec": 0, 00:10:35.004 "w_mbytes_per_sec": 0 00:10:35.004 }, 00:10:35.004 "claimed": true, 00:10:35.004 "claim_type": "exclusive_write", 00:10:35.004 "zoned": false, 00:10:35.004 "supported_io_types": { 00:10:35.004 "read": true, 00:10:35.004 "write": true, 00:10:35.004 "unmap": true, 00:10:35.004 "flush": true, 
00:10:35.004 "reset": true, 00:10:35.004 "nvme_admin": false, 00:10:35.004 "nvme_io": false, 00:10:35.004 "nvme_io_md": false, 00:10:35.004 "write_zeroes": true, 00:10:35.004 "zcopy": true, 00:10:35.004 "get_zone_info": false, 00:10:35.004 "zone_management": false, 00:10:35.004 "zone_append": false, 00:10:35.004 "compare": false, 00:10:35.004 "compare_and_write": false, 00:10:35.004 "abort": true, 00:10:35.004 "seek_hole": false, 00:10:35.004 "seek_data": false, 00:10:35.004 "copy": true, 00:10:35.004 "nvme_iov_md": false 00:10:35.004 }, 00:10:35.004 "memory_domains": [ 00:10:35.004 { 00:10:35.004 "dma_device_id": "system", 00:10:35.004 "dma_device_type": 1 00:10:35.004 }, 00:10:35.004 { 00:10:35.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.004 "dma_device_type": 2 00:10:35.004 } 00:10:35.004 ], 00:10:35.004 "driver_specific": {} 00:10:35.004 } 00:10:35.004 ] 00:10:35.004 15:37:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.004 15:37:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:35.004 15:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:35.004 15:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.004 15:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.004 15:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:35.004 15:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.004 15:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:35.004 15:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.004 15:37:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.004 15:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.004 15:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.004 15:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.004 15:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.004 15:37:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.004 15:37:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.263 15:37:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.263 15:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.263 "name": "Existed_Raid", 00:10:35.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.263 "strip_size_kb": 64, 00:10:35.263 "state": "configuring", 00:10:35.263 "raid_level": "concat", 00:10:35.263 "superblock": false, 00:10:35.263 "num_base_bdevs": 2, 00:10:35.263 "num_base_bdevs_discovered": 1, 00:10:35.263 "num_base_bdevs_operational": 2, 00:10:35.263 "base_bdevs_list": [ 00:10:35.263 { 00:10:35.263 "name": "BaseBdev1", 00:10:35.263 "uuid": "ddea562b-d405-44ec-bc48-92524313e3b6", 00:10:35.263 "is_configured": true, 00:10:35.263 "data_offset": 0, 00:10:35.263 "data_size": 65536 00:10:35.263 }, 00:10:35.263 { 00:10:35.263 "name": "BaseBdev2", 00:10:35.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.263 "is_configured": false, 00:10:35.263 "data_offset": 0, 00:10:35.263 "data_size": 0 00:10:35.263 } 00:10:35.263 ] 00:10:35.263 }' 00:10:35.263 15:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.263 15:37:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:35.522 15:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:35.522 15:37:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.522 15:37:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.522 [2024-12-06 15:37:18.686208] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:35.522 [2024-12-06 15:37:18.686297] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:35.522 15:37:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.522 15:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:35.522 15:37:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.522 15:37:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.522 [2024-12-06 15:37:18.698316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:35.522 [2024-12-06 15:37:18.700993] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:35.522 [2024-12-06 15:37:18.701292] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:35.522 15:37:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.522 15:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:35.522 15:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:35.522 15:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 
2 00:10:35.522 15:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.522 15:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.522 15:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:35.522 15:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.522 15:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:35.522 15:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.522 15:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.522 15:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.522 15:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.522 15:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.522 15:37:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.522 15:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.522 15:37:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.522 15:37:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.522 15:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.522 "name": "Existed_Raid", 00:10:35.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.522 "strip_size_kb": 64, 00:10:35.522 "state": "configuring", 00:10:35.522 "raid_level": "concat", 00:10:35.522 "superblock": false, 00:10:35.522 "num_base_bdevs": 2, 00:10:35.522 
"num_base_bdevs_discovered": 1, 00:10:35.522 "num_base_bdevs_operational": 2, 00:10:35.522 "base_bdevs_list": [ 00:10:35.522 { 00:10:35.522 "name": "BaseBdev1", 00:10:35.522 "uuid": "ddea562b-d405-44ec-bc48-92524313e3b6", 00:10:35.522 "is_configured": true, 00:10:35.522 "data_offset": 0, 00:10:35.522 "data_size": 65536 00:10:35.522 }, 00:10:35.522 { 00:10:35.522 "name": "BaseBdev2", 00:10:35.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.522 "is_configured": false, 00:10:35.522 "data_offset": 0, 00:10:35.522 "data_size": 0 00:10:35.522 } 00:10:35.522 ] 00:10:35.522 }' 00:10:35.522 15:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.522 15:37:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.091 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:36.091 15:37:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.091 15:37:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.091 [2024-12-06 15:37:19.175175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:36.091 [2024-12-06 15:37:19.175253] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:36.091 [2024-12-06 15:37:19.175263] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:36.091 [2024-12-06 15:37:19.175630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:36.091 [2024-12-06 15:37:19.175845] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:36.091 [2024-12-06 15:37:19.175861] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:36.091 [2024-12-06 15:37:19.176249] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.091 BaseBdev2 00:10:36.091 15:37:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.091 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:36.091 15:37:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:36.091 15:37:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:36.091 15:37:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:36.091 15:37:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:36.091 15:37:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:36.091 15:37:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:36.091 15:37:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.091 15:37:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.091 15:37:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.091 15:37:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:36.091 15:37:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.091 15:37:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.091 [ 00:10:36.091 { 00:10:36.091 "name": "BaseBdev2", 00:10:36.091 "aliases": [ 00:10:36.091 "d40745d0-949a-4c2c-81ff-1c062fab3d18" 00:10:36.091 ], 00:10:36.091 "product_name": "Malloc disk", 00:10:36.091 "block_size": 512, 00:10:36.091 "num_blocks": 65536, 00:10:36.091 "uuid": "d40745d0-949a-4c2c-81ff-1c062fab3d18", 00:10:36.091 
"assigned_rate_limits": { 00:10:36.091 "rw_ios_per_sec": 0, 00:10:36.091 "rw_mbytes_per_sec": 0, 00:10:36.091 "r_mbytes_per_sec": 0, 00:10:36.091 "w_mbytes_per_sec": 0 00:10:36.091 }, 00:10:36.091 "claimed": true, 00:10:36.091 "claim_type": "exclusive_write", 00:10:36.091 "zoned": false, 00:10:36.091 "supported_io_types": { 00:10:36.091 "read": true, 00:10:36.091 "write": true, 00:10:36.091 "unmap": true, 00:10:36.091 "flush": true, 00:10:36.091 "reset": true, 00:10:36.091 "nvme_admin": false, 00:10:36.091 "nvme_io": false, 00:10:36.091 "nvme_io_md": false, 00:10:36.091 "write_zeroes": true, 00:10:36.091 "zcopy": true, 00:10:36.091 "get_zone_info": false, 00:10:36.091 "zone_management": false, 00:10:36.091 "zone_append": false, 00:10:36.091 "compare": false, 00:10:36.091 "compare_and_write": false, 00:10:36.091 "abort": true, 00:10:36.091 "seek_hole": false, 00:10:36.091 "seek_data": false, 00:10:36.091 "copy": true, 00:10:36.091 "nvme_iov_md": false 00:10:36.091 }, 00:10:36.091 "memory_domains": [ 00:10:36.091 { 00:10:36.091 "dma_device_id": "system", 00:10:36.091 "dma_device_type": 1 00:10:36.091 }, 00:10:36.091 { 00:10:36.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.091 "dma_device_type": 2 00:10:36.091 } 00:10:36.091 ], 00:10:36.091 "driver_specific": {} 00:10:36.091 } 00:10:36.091 ] 00:10:36.091 15:37:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.091 15:37:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:36.091 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:36.091 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:36.091 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:10:36.091 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:36.091 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.091 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.091 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.091 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:36.091 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.091 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.091 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.091 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.091 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.091 15:37:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.091 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.091 15:37:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.091 15:37:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.091 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.091 "name": "Existed_Raid", 00:10:36.091 "uuid": "959df286-0e8a-4f5a-9320-038ad582184d", 00:10:36.091 "strip_size_kb": 64, 00:10:36.091 "state": "online", 00:10:36.091 "raid_level": "concat", 00:10:36.091 "superblock": false, 00:10:36.091 "num_base_bdevs": 2, 00:10:36.091 "num_base_bdevs_discovered": 2, 00:10:36.091 "num_base_bdevs_operational": 2, 00:10:36.091 "base_bdevs_list": [ 00:10:36.091 { 
00:10:36.091 "name": "BaseBdev1", 00:10:36.091 "uuid": "ddea562b-d405-44ec-bc48-92524313e3b6", 00:10:36.091 "is_configured": true, 00:10:36.091 "data_offset": 0, 00:10:36.091 "data_size": 65536 00:10:36.091 }, 00:10:36.091 { 00:10:36.091 "name": "BaseBdev2", 00:10:36.091 "uuid": "d40745d0-949a-4c2c-81ff-1c062fab3d18", 00:10:36.091 "is_configured": true, 00:10:36.091 "data_offset": 0, 00:10:36.091 "data_size": 65536 00:10:36.091 } 00:10:36.091 ] 00:10:36.091 }' 00:10:36.091 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.091 15:37:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.351 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:36.351 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:36.351 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:36.351 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:36.351 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:36.351 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:36.351 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:36.351 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:36.351 15:37:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.351 15:37:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.351 [2024-12-06 15:37:19.626923] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:36.681 15:37:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:10:36.681 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:36.681 "name": "Existed_Raid", 00:10:36.681 "aliases": [ 00:10:36.681 "959df286-0e8a-4f5a-9320-038ad582184d" 00:10:36.681 ], 00:10:36.681 "product_name": "Raid Volume", 00:10:36.681 "block_size": 512, 00:10:36.681 "num_blocks": 131072, 00:10:36.681 "uuid": "959df286-0e8a-4f5a-9320-038ad582184d", 00:10:36.681 "assigned_rate_limits": { 00:10:36.681 "rw_ios_per_sec": 0, 00:10:36.681 "rw_mbytes_per_sec": 0, 00:10:36.681 "r_mbytes_per_sec": 0, 00:10:36.681 "w_mbytes_per_sec": 0 00:10:36.681 }, 00:10:36.681 "claimed": false, 00:10:36.681 "zoned": false, 00:10:36.681 "supported_io_types": { 00:10:36.681 "read": true, 00:10:36.681 "write": true, 00:10:36.681 "unmap": true, 00:10:36.681 "flush": true, 00:10:36.681 "reset": true, 00:10:36.681 "nvme_admin": false, 00:10:36.681 "nvme_io": false, 00:10:36.681 "nvme_io_md": false, 00:10:36.681 "write_zeroes": true, 00:10:36.681 "zcopy": false, 00:10:36.681 "get_zone_info": false, 00:10:36.681 "zone_management": false, 00:10:36.681 "zone_append": false, 00:10:36.681 "compare": false, 00:10:36.681 "compare_and_write": false, 00:10:36.681 "abort": false, 00:10:36.681 "seek_hole": false, 00:10:36.681 "seek_data": false, 00:10:36.681 "copy": false, 00:10:36.681 "nvme_iov_md": false 00:10:36.681 }, 00:10:36.681 "memory_domains": [ 00:10:36.681 { 00:10:36.681 "dma_device_id": "system", 00:10:36.681 "dma_device_type": 1 00:10:36.681 }, 00:10:36.681 { 00:10:36.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.681 "dma_device_type": 2 00:10:36.681 }, 00:10:36.681 { 00:10:36.681 "dma_device_id": "system", 00:10:36.681 "dma_device_type": 1 00:10:36.681 }, 00:10:36.681 { 00:10:36.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.681 "dma_device_type": 2 00:10:36.681 } 00:10:36.681 ], 00:10:36.681 "driver_specific": { 00:10:36.681 "raid": { 00:10:36.681 "uuid": "959df286-0e8a-4f5a-9320-038ad582184d", 
00:10:36.681 "strip_size_kb": 64, 00:10:36.681 "state": "online", 00:10:36.681 "raid_level": "concat", 00:10:36.681 "superblock": false, 00:10:36.681 "num_base_bdevs": 2, 00:10:36.681 "num_base_bdevs_discovered": 2, 00:10:36.682 "num_base_bdevs_operational": 2, 00:10:36.682 "base_bdevs_list": [ 00:10:36.682 { 00:10:36.682 "name": "BaseBdev1", 00:10:36.682 "uuid": "ddea562b-d405-44ec-bc48-92524313e3b6", 00:10:36.682 "is_configured": true, 00:10:36.682 "data_offset": 0, 00:10:36.682 "data_size": 65536 00:10:36.682 }, 00:10:36.682 { 00:10:36.682 "name": "BaseBdev2", 00:10:36.682 "uuid": "d40745d0-949a-4c2c-81ff-1c062fab3d18", 00:10:36.682 "is_configured": true, 00:10:36.682 "data_offset": 0, 00:10:36.682 "data_size": 65536 00:10:36.682 } 00:10:36.682 ] 00:10:36.682 } 00:10:36.682 } 00:10:36.682 }' 00:10:36.682 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:36.682 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:36.682 BaseBdev2' 00:10:36.682 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.682 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:36.682 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.682 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:36.682 15:37:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.682 15:37:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.682 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:10:36.682 15:37:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.682 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.682 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.682 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.682 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.682 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:36.682 15:37:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.682 15:37:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.682 15:37:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.682 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.682 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.682 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:36.682 15:37:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.682 15:37:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.682 [2024-12-06 15:37:19.854359] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:36.682 [2024-12-06 15:37:19.854414] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:36.682 [2024-12-06 15:37:19.854486] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:36.944 15:37:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.944 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:36.944 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:36.944 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:36.944 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:36.944 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:36.944 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:10:36.944 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.944 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:36.944 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.944 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.944 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:36.944 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.944 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.944 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.944 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.944 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.944 15:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:10:36.944 15:37:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.944 15:37:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.944 15:37:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.944 15:37:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.944 "name": "Existed_Raid", 00:10:36.944 "uuid": "959df286-0e8a-4f5a-9320-038ad582184d", 00:10:36.944 "strip_size_kb": 64, 00:10:36.944 "state": "offline", 00:10:36.944 "raid_level": "concat", 00:10:36.944 "superblock": false, 00:10:36.944 "num_base_bdevs": 2, 00:10:36.944 "num_base_bdevs_discovered": 1, 00:10:36.944 "num_base_bdevs_operational": 1, 00:10:36.944 "base_bdevs_list": [ 00:10:36.944 { 00:10:36.944 "name": null, 00:10:36.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.944 "is_configured": false, 00:10:36.944 "data_offset": 0, 00:10:36.944 "data_size": 65536 00:10:36.944 }, 00:10:36.944 { 00:10:36.944 "name": "BaseBdev2", 00:10:36.944 "uuid": "d40745d0-949a-4c2c-81ff-1c062fab3d18", 00:10:36.944 "is_configured": true, 00:10:36.944 "data_offset": 0, 00:10:36.944 "data_size": 65536 00:10:36.944 } 00:10:36.944 ] 00:10:36.944 }' 00:10:36.944 15:37:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.944 15:37:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.204 15:37:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:37.204 15:37:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:37.204 15:37:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:37.204 15:37:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.204 15:37:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.204 15:37:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.204 15:37:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.204 15:37:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:37.204 15:37:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:37.204 15:37:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:37.204 15:37:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.204 15:37:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.204 [2024-12-06 15:37:20.407874] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:37.204 [2024-12-06 15:37:20.407959] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:37.464 15:37:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.464 15:37:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:37.464 15:37:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:37.464 15:37:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.464 15:37:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:37.464 15:37:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.464 15:37:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.464 15:37:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:10:37.464 15:37:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:37.464 15:37:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:37.464 15:37:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:37.464 15:37:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61725 00:10:37.464 15:37:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61725 ']' 00:10:37.464 15:37:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 61725 00:10:37.464 15:37:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:37.464 15:37:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:37.464 15:37:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61725 00:10:37.464 killing process with pid 61725 00:10:37.464 15:37:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:37.464 15:37:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:37.464 15:37:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61725' 00:10:37.464 15:37:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61725 00:10:37.464 [2024-12-06 15:37:20.608639] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:37.464 15:37:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61725 00:10:37.464 [2024-12-06 15:37:20.628575] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:38.844 ************************************ 00:10:38.844 END TEST raid_state_function_test 00:10:38.844 ************************************ 00:10:38.844 15:37:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:38.844 00:10:38.844 real 0m5.126s 00:10:38.844 user 0m7.083s 00:10:38.844 sys 0m1.059s 00:10:38.844 15:37:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:38.844 15:37:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.844 15:37:21 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:10:38.844 15:37:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:38.844 15:37:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:38.844 15:37:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:38.844 ************************************ 00:10:38.844 START TEST raid_state_function_test_sb 00:10:38.844 ************************************ 00:10:38.844 15:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:10:38.844 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:38.844 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:38.844 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:38.844 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:38.844 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:38.844 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:38.844 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:38.844 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:38.844 15:37:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:38.844 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:38.844 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:38.844 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:38.844 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:38.844 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:38.844 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:38.844 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:38.844 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:38.844 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:38.844 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:38.844 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:38.844 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:38.844 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:38.844 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:38.844 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61973 00:10:38.844 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:38.844 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # 
echo 'Process raid pid: 61973' 00:10:38.844 Process raid pid: 61973 00:10:38.844 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61973 00:10:38.844 15:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61973 ']' 00:10:38.844 15:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.844 15:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:38.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.844 15:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.844 15:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:38.844 15:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.844 [2024-12-06 15:37:22.089033] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:10:38.844 [2024-12-06 15:37:22.089222] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.104 [2024-12-06 15:37:22.273227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.363 [2024-12-06 15:37:22.428138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.622 [2024-12-06 15:37:22.700894] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:39.622 [2024-12-06 15:37:22.700978] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:39.882 15:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:39.882 15:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:39.882 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:39.882 15:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.882 15:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.882 [2024-12-06 15:37:22.971256] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:39.882 [2024-12-06 15:37:22.971360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:39.882 [2024-12-06 15:37:22.971386] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:39.882 [2024-12-06 15:37:22.971403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:39.882 15:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:39.882 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:39.882 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.882 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.882 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.882 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.882 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:39.882 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.882 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.882 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.882 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.882 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.882 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.882 15:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.882 15:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.882 15:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.882 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.882 "name": "Existed_Raid", 00:10:39.882 "uuid": "b64c973e-dc87-47ff-8d5d-fdc899f53b4a", 00:10:39.882 
"strip_size_kb": 64, 00:10:39.882 "state": "configuring", 00:10:39.882 "raid_level": "concat", 00:10:39.882 "superblock": true, 00:10:39.882 "num_base_bdevs": 2, 00:10:39.882 "num_base_bdevs_discovered": 0, 00:10:39.882 "num_base_bdevs_operational": 2, 00:10:39.882 "base_bdevs_list": [ 00:10:39.882 { 00:10:39.882 "name": "BaseBdev1", 00:10:39.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.882 "is_configured": false, 00:10:39.882 "data_offset": 0, 00:10:39.882 "data_size": 0 00:10:39.882 }, 00:10:39.882 { 00:10:39.882 "name": "BaseBdev2", 00:10:39.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.882 "is_configured": false, 00:10:39.882 "data_offset": 0, 00:10:39.882 "data_size": 0 00:10:39.882 } 00:10:39.882 ] 00:10:39.882 }' 00:10:39.882 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.882 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.141 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:40.141 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.141 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.141 [2024-12-06 15:37:23.402699] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:40.141 [2024-12-06 15:37:23.402769] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:40.141 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.141 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:40.141 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:40.141 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.141 [2024-12-06 15:37:23.414702] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:40.141 [2024-12-06 15:37:23.414803] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:40.141 [2024-12-06 15:37:23.414818] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:40.141 [2024-12-06 15:37:23.414837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:40.141 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.141 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:40.141 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.141 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.405 [2024-12-06 15:37:23.469341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:40.406 BaseBdev1 00:10:40.406 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.406 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:40.406 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:40.406 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:40.406 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:40.406 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:40.406 15:37:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:40.406 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:40.406 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.406 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.406 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.406 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:40.406 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.406 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.406 [ 00:10:40.406 { 00:10:40.406 "name": "BaseBdev1", 00:10:40.406 "aliases": [ 00:10:40.406 "9f6ef319-6429-4718-82b4-827160a0d5fc" 00:10:40.406 ], 00:10:40.406 "product_name": "Malloc disk", 00:10:40.406 "block_size": 512, 00:10:40.406 "num_blocks": 65536, 00:10:40.406 "uuid": "9f6ef319-6429-4718-82b4-827160a0d5fc", 00:10:40.406 "assigned_rate_limits": { 00:10:40.406 "rw_ios_per_sec": 0, 00:10:40.406 "rw_mbytes_per_sec": 0, 00:10:40.406 "r_mbytes_per_sec": 0, 00:10:40.406 "w_mbytes_per_sec": 0 00:10:40.406 }, 00:10:40.406 "claimed": true, 00:10:40.406 "claim_type": "exclusive_write", 00:10:40.406 "zoned": false, 00:10:40.406 "supported_io_types": { 00:10:40.406 "read": true, 00:10:40.406 "write": true, 00:10:40.406 "unmap": true, 00:10:40.406 "flush": true, 00:10:40.406 "reset": true, 00:10:40.406 "nvme_admin": false, 00:10:40.406 "nvme_io": false, 00:10:40.406 "nvme_io_md": false, 00:10:40.406 "write_zeroes": true, 00:10:40.406 "zcopy": true, 00:10:40.406 "get_zone_info": false, 00:10:40.406 "zone_management": false, 00:10:40.406 "zone_append": false, 00:10:40.406 "compare": false, 00:10:40.406 
"compare_and_write": false, 00:10:40.406 "abort": true, 00:10:40.406 "seek_hole": false, 00:10:40.406 "seek_data": false, 00:10:40.406 "copy": true, 00:10:40.406 "nvme_iov_md": false 00:10:40.406 }, 00:10:40.406 "memory_domains": [ 00:10:40.406 { 00:10:40.406 "dma_device_id": "system", 00:10:40.406 "dma_device_type": 1 00:10:40.406 }, 00:10:40.406 { 00:10:40.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.406 "dma_device_type": 2 00:10:40.406 } 00:10:40.406 ], 00:10:40.406 "driver_specific": {} 00:10:40.406 } 00:10:40.406 ] 00:10:40.406 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.406 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:40.406 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:40.406 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.406 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.406 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:40.406 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.406 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:40.406 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.406 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.406 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.406 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.406 15:37:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.406 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.406 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.406 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.406 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.406 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.406 "name": "Existed_Raid", 00:10:40.406 "uuid": "235c67ae-e4e1-43bf-be39-1847092443cf", 00:10:40.406 "strip_size_kb": 64, 00:10:40.406 "state": "configuring", 00:10:40.406 "raid_level": "concat", 00:10:40.406 "superblock": true, 00:10:40.406 "num_base_bdevs": 2, 00:10:40.406 "num_base_bdevs_discovered": 1, 00:10:40.406 "num_base_bdevs_operational": 2, 00:10:40.406 "base_bdevs_list": [ 00:10:40.406 { 00:10:40.406 "name": "BaseBdev1", 00:10:40.406 "uuid": "9f6ef319-6429-4718-82b4-827160a0d5fc", 00:10:40.406 "is_configured": true, 00:10:40.406 "data_offset": 2048, 00:10:40.406 "data_size": 63488 00:10:40.406 }, 00:10:40.406 { 00:10:40.406 "name": "BaseBdev2", 00:10:40.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.406 "is_configured": false, 00:10:40.406 "data_offset": 0, 00:10:40.406 "data_size": 0 00:10:40.406 } 00:10:40.406 ] 00:10:40.406 }' 00:10:40.406 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.406 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.975 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:40.975 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:40.975 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.975 [2024-12-06 15:37:23.988765] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:40.975 [2024-12-06 15:37:23.988853] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:40.975 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.975 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:40.975 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.975 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.975 [2024-12-06 15:37:24.000856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:40.975 [2024-12-06 15:37:24.003554] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:40.975 [2024-12-06 15:37:24.003792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:40.975 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.975 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:40.975 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:40.975 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:40.975 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.975 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:10:40.975 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:40.975 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.975 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:40.975 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.975 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.975 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.975 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.975 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.975 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.975 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.975 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.975 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.975 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.975 "name": "Existed_Raid", 00:10:40.975 "uuid": "fba023a6-a103-452d-9574-341c833e4fa1", 00:10:40.975 "strip_size_kb": 64, 00:10:40.975 "state": "configuring", 00:10:40.975 "raid_level": "concat", 00:10:40.975 "superblock": true, 00:10:40.975 "num_base_bdevs": 2, 00:10:40.975 "num_base_bdevs_discovered": 1, 00:10:40.975 "num_base_bdevs_operational": 2, 00:10:40.975 "base_bdevs_list": [ 00:10:40.975 { 00:10:40.975 "name": "BaseBdev1", 00:10:40.975 "uuid": 
"9f6ef319-6429-4718-82b4-827160a0d5fc", 00:10:40.975 "is_configured": true, 00:10:40.975 "data_offset": 2048, 00:10:40.975 "data_size": 63488 00:10:40.975 }, 00:10:40.975 { 00:10:40.975 "name": "BaseBdev2", 00:10:40.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.975 "is_configured": false, 00:10:40.975 "data_offset": 0, 00:10:40.975 "data_size": 0 00:10:40.975 } 00:10:40.975 ] 00:10:40.975 }' 00:10:40.975 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.975 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.235 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:41.235 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.235 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.235 [2024-12-06 15:37:24.469742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:41.235 [2024-12-06 15:37:24.470524] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:41.235 [2024-12-06 15:37:24.470555] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:41.235 [2024-12-06 15:37:24.471014] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:41.235 [2024-12-06 15:37:24.471218] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:41.235 [2024-12-06 15:37:24.471236] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:41.235 BaseBdev2 00:10:41.235 [2024-12-06 15:37:24.471416] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:41.235 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:10:41.235 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:41.235 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:41.235 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.235 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:41.235 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.236 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:41.236 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:41.236 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.236 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.236 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.236 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:41.236 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.236 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.236 [ 00:10:41.236 { 00:10:41.236 "name": "BaseBdev2", 00:10:41.236 "aliases": [ 00:10:41.236 "3d128570-b2af-4b19-947c-47a750be64e6" 00:10:41.236 ], 00:10:41.236 "product_name": "Malloc disk", 00:10:41.236 "block_size": 512, 00:10:41.236 "num_blocks": 65536, 00:10:41.236 "uuid": "3d128570-b2af-4b19-947c-47a750be64e6", 00:10:41.236 "assigned_rate_limits": { 00:10:41.236 "rw_ios_per_sec": 0, 00:10:41.236 "rw_mbytes_per_sec": 0, 00:10:41.236 "r_mbytes_per_sec": 0, 
00:10:41.236 "w_mbytes_per_sec": 0 00:10:41.236 }, 00:10:41.236 "claimed": true, 00:10:41.236 "claim_type": "exclusive_write", 00:10:41.236 "zoned": false, 00:10:41.236 "supported_io_types": { 00:10:41.236 "read": true, 00:10:41.236 "write": true, 00:10:41.236 "unmap": true, 00:10:41.236 "flush": true, 00:10:41.236 "reset": true, 00:10:41.236 "nvme_admin": false, 00:10:41.236 "nvme_io": false, 00:10:41.236 "nvme_io_md": false, 00:10:41.236 "write_zeroes": true, 00:10:41.236 "zcopy": true, 00:10:41.236 "get_zone_info": false, 00:10:41.236 "zone_management": false, 00:10:41.236 "zone_append": false, 00:10:41.236 "compare": false, 00:10:41.236 "compare_and_write": false, 00:10:41.236 "abort": true, 00:10:41.236 "seek_hole": false, 00:10:41.236 "seek_data": false, 00:10:41.236 "copy": true, 00:10:41.236 "nvme_iov_md": false 00:10:41.236 }, 00:10:41.236 "memory_domains": [ 00:10:41.236 { 00:10:41.236 "dma_device_id": "system", 00:10:41.236 "dma_device_type": 1 00:10:41.236 }, 00:10:41.236 { 00:10:41.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.236 "dma_device_type": 2 00:10:41.236 } 00:10:41.236 ], 00:10:41.236 "driver_specific": {} 00:10:41.236 } 00:10:41.236 ] 00:10:41.236 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.236 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:41.236 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:41.236 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:41.236 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:10:41.236 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.236 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:10:41.236 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:41.236 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.236 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:41.236 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.236 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.236 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.236 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.236 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.236 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.236 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.495 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.495 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.495 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.495 "name": "Existed_Raid", 00:10:41.495 "uuid": "fba023a6-a103-452d-9574-341c833e4fa1", 00:10:41.495 "strip_size_kb": 64, 00:10:41.495 "state": "online", 00:10:41.495 "raid_level": "concat", 00:10:41.495 "superblock": true, 00:10:41.495 "num_base_bdevs": 2, 00:10:41.495 "num_base_bdevs_discovered": 2, 00:10:41.495 "num_base_bdevs_operational": 2, 00:10:41.495 "base_bdevs_list": [ 00:10:41.495 { 00:10:41.495 "name": "BaseBdev1", 00:10:41.495 "uuid": 
"9f6ef319-6429-4718-82b4-827160a0d5fc", 00:10:41.495 "is_configured": true, 00:10:41.495 "data_offset": 2048, 00:10:41.495 "data_size": 63488 00:10:41.495 }, 00:10:41.495 { 00:10:41.495 "name": "BaseBdev2", 00:10:41.495 "uuid": "3d128570-b2af-4b19-947c-47a750be64e6", 00:10:41.495 "is_configured": true, 00:10:41.495 "data_offset": 2048, 00:10:41.495 "data_size": 63488 00:10:41.495 } 00:10:41.495 ] 00:10:41.495 }' 00:10:41.495 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.495 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.754 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:41.754 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:41.754 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:41.754 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:41.754 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:41.754 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:41.754 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:41.754 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.754 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.754 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:41.754 [2024-12-06 15:37:24.929583] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:41.754 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:10:41.754 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:41.754 "name": "Existed_Raid", 00:10:41.754 "aliases": [ 00:10:41.754 "fba023a6-a103-452d-9574-341c833e4fa1" 00:10:41.754 ], 00:10:41.754 "product_name": "Raid Volume", 00:10:41.754 "block_size": 512, 00:10:41.754 "num_blocks": 126976, 00:10:41.754 "uuid": "fba023a6-a103-452d-9574-341c833e4fa1", 00:10:41.754 "assigned_rate_limits": { 00:10:41.754 "rw_ios_per_sec": 0, 00:10:41.754 "rw_mbytes_per_sec": 0, 00:10:41.754 "r_mbytes_per_sec": 0, 00:10:41.754 "w_mbytes_per_sec": 0 00:10:41.754 }, 00:10:41.754 "claimed": false, 00:10:41.754 "zoned": false, 00:10:41.754 "supported_io_types": { 00:10:41.754 "read": true, 00:10:41.754 "write": true, 00:10:41.754 "unmap": true, 00:10:41.754 "flush": true, 00:10:41.754 "reset": true, 00:10:41.754 "nvme_admin": false, 00:10:41.754 "nvme_io": false, 00:10:41.754 "nvme_io_md": false, 00:10:41.754 "write_zeroes": true, 00:10:41.754 "zcopy": false, 00:10:41.754 "get_zone_info": false, 00:10:41.754 "zone_management": false, 00:10:41.754 "zone_append": false, 00:10:41.754 "compare": false, 00:10:41.754 "compare_and_write": false, 00:10:41.754 "abort": false, 00:10:41.754 "seek_hole": false, 00:10:41.754 "seek_data": false, 00:10:41.754 "copy": false, 00:10:41.754 "nvme_iov_md": false 00:10:41.754 }, 00:10:41.754 "memory_domains": [ 00:10:41.754 { 00:10:41.754 "dma_device_id": "system", 00:10:41.754 "dma_device_type": 1 00:10:41.754 }, 00:10:41.754 { 00:10:41.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.754 "dma_device_type": 2 00:10:41.754 }, 00:10:41.754 { 00:10:41.754 "dma_device_id": "system", 00:10:41.754 "dma_device_type": 1 00:10:41.754 }, 00:10:41.754 { 00:10:41.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.754 "dma_device_type": 2 00:10:41.754 } 00:10:41.754 ], 00:10:41.754 "driver_specific": { 00:10:41.754 "raid": { 00:10:41.754 "uuid": "fba023a6-a103-452d-9574-341c833e4fa1", 00:10:41.754 
"strip_size_kb": 64, 00:10:41.754 "state": "online", 00:10:41.754 "raid_level": "concat", 00:10:41.754 "superblock": true, 00:10:41.754 "num_base_bdevs": 2, 00:10:41.754 "num_base_bdevs_discovered": 2, 00:10:41.754 "num_base_bdevs_operational": 2, 00:10:41.754 "base_bdevs_list": [ 00:10:41.754 { 00:10:41.754 "name": "BaseBdev1", 00:10:41.754 "uuid": "9f6ef319-6429-4718-82b4-827160a0d5fc", 00:10:41.754 "is_configured": true, 00:10:41.754 "data_offset": 2048, 00:10:41.754 "data_size": 63488 00:10:41.754 }, 00:10:41.754 { 00:10:41.754 "name": "BaseBdev2", 00:10:41.754 "uuid": "3d128570-b2af-4b19-947c-47a750be64e6", 00:10:41.754 "is_configured": true, 00:10:41.754 "data_offset": 2048, 00:10:41.754 "data_size": 63488 00:10:41.754 } 00:10:41.754 ] 00:10:41.754 } 00:10:41.754 } 00:10:41.754 }' 00:10:41.754 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:41.754 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:41.754 BaseBdev2' 00:10:41.754 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.754 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:41.754 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.754 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:41.754 15:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.754 15:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.754 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:10:42.013 15:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.013 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.013 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.013 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.013 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.013 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:42.013 15:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.013 15:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.013 15:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.013 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.013 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.013 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:42.013 15:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.013 15:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.013 [2024-12-06 15:37:25.141024] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:42.013 [2024-12-06 15:37:25.141295] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:42.013 [2024-12-06 15:37:25.141416] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:10:42.013 15:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.013 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:42.013 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:42.013 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:42.013 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:42.013 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:42.013 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:10:42.013 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.013 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:42.013 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.013 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.013 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:42.013 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.013 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.013 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.013 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.013 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.013 
15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.013 15:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.013 15:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.013 15:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.271 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.271 "name": "Existed_Raid", 00:10:42.271 "uuid": "fba023a6-a103-452d-9574-341c833e4fa1", 00:10:42.271 "strip_size_kb": 64, 00:10:42.271 "state": "offline", 00:10:42.271 "raid_level": "concat", 00:10:42.271 "superblock": true, 00:10:42.271 "num_base_bdevs": 2, 00:10:42.271 "num_base_bdevs_discovered": 1, 00:10:42.271 "num_base_bdevs_operational": 1, 00:10:42.271 "base_bdevs_list": [ 00:10:42.271 { 00:10:42.271 "name": null, 00:10:42.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.271 "is_configured": false, 00:10:42.271 "data_offset": 0, 00:10:42.271 "data_size": 63488 00:10:42.271 }, 00:10:42.271 { 00:10:42.271 "name": "BaseBdev2", 00:10:42.271 "uuid": "3d128570-b2af-4b19-947c-47a750be64e6", 00:10:42.271 "is_configured": true, 00:10:42.271 "data_offset": 2048, 00:10:42.271 "data_size": 63488 00:10:42.271 } 00:10:42.271 ] 00:10:42.271 }' 00:10:42.271 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.271 15:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.529 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:42.529 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:42.529 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.529 15:37:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:42.529 15:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.529 15:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.529 15:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.529 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:42.529 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:42.529 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:42.529 15:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.529 15:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.529 [2024-12-06 15:37:25.726768] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:42.529 [2024-12-06 15:37:25.727071] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:42.788 15:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.788 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:42.788 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:42.788 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.788 15:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.788 15:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.788 15:37:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:42.788 15:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.788 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:42.788 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:42.788 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:42.788 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61973 00:10:42.788 15:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61973 ']' 00:10:42.788 15:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61973 00:10:42.788 15:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:42.788 15:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:42.788 15:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61973 00:10:42.788 killing process with pid 61973 00:10:42.788 15:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:42.788 15:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:42.788 15:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61973' 00:10:42.788 15:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61973 00:10:42.788 [2024-12-06 15:37:25.929868] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:42.788 15:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61973 00:10:42.788 [2024-12-06 15:37:25.947668] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:10:44.172 15:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:44.172 00:10:44.172 real 0m5.285s 00:10:44.172 user 0m7.289s 00:10:44.172 sys 0m1.086s 00:10:44.172 15:37:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.172 15:37:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.172 ************************************ 00:10:44.172 END TEST raid_state_function_test_sb 00:10:44.172 ************************************ 00:10:44.172 15:37:27 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:10:44.172 15:37:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:44.172 15:37:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.172 15:37:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:44.172 ************************************ 00:10:44.172 START TEST raid_superblock_test 00:10:44.172 ************************************ 00:10:44.172 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:10:44.172 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:44.172 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:10:44.172 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:44.172 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:44.172 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:44.172 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:44.172 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:44.172 15:37:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:44.172 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:44.172 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:44.172 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:44.172 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:44.172 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:44.172 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:44.172 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:44.172 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:44.172 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62225 00:10:44.172 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:44.172 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62225 00:10:44.172 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62225 ']' 00:10:44.172 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.172 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:44.172 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:44.172 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:44.172 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.172 [2024-12-06 15:37:27.445582] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:10:44.172 [2024-12-06 15:37:27.445812] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62225 ] 00:10:44.431 [2024-12-06 15:37:27.655613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.691 [2024-12-06 15:37:27.806446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.950 [2024-12-06 15:37:28.056443] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:44.950 [2024-12-06 15:37:28.056551] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:45.210 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:45.210 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:45.210 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:45.210 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:45.210 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:45.210 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:45.210 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:45.210 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:45.210 15:37:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:45.210 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:45.210 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:45.210 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.210 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.210 malloc1 00:10:45.210 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.210 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:45.210 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.210 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.210 [2024-12-06 15:37:28.372295] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:45.210 [2024-12-06 15:37:28.372694] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.210 [2024-12-06 15:37:28.372744] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:45.210 [2024-12-06 15:37:28.372759] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.210 [2024-12-06 15:37:28.375897] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.210 [2024-12-06 15:37:28.375952] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:45.210 pt1 00:10:45.210 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.210 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:45.210 15:37:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:45.210 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:45.210 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:45.210 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:45.210 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:45.210 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:45.210 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:45.210 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:45.210 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.210 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.210 malloc2 00:10:45.210 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.210 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:45.211 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.211 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.211 [2024-12-06 15:37:28.433604] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:45.211 [2024-12-06 15:37:28.433706] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.211 [2024-12-06 15:37:28.433750] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:45.211 
[2024-12-06 15:37:28.433763] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.211 [2024-12-06 15:37:28.436793] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.211 [2024-12-06 15:37:28.437007] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:45.211 pt2 00:10:45.211 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.211 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:45.211 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:45.211 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:10:45.211 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.211 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.211 [2024-12-06 15:37:28.445934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:45.211 [2024-12-06 15:37:28.448597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:45.211 [2024-12-06 15:37:28.448812] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:45.211 [2024-12-06 15:37:28.448828] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:45.211 [2024-12-06 15:37:28.449189] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:45.211 [2024-12-06 15:37:28.449374] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:45.211 [2024-12-06 15:37:28.449388] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:45.211 [2024-12-06 15:37:28.449624] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.211 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.211 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:45.211 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:45.211 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.211 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.211 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.211 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:45.211 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.211 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.211 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.211 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.211 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.211 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.211 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.211 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.211 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.211 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.211 "name": "raid_bdev1", 00:10:45.211 "uuid": 
"ce5eab12-5fe5-4281-8114-92933aa262f2", 00:10:45.211 "strip_size_kb": 64, 00:10:45.211 "state": "online", 00:10:45.211 "raid_level": "concat", 00:10:45.211 "superblock": true, 00:10:45.211 "num_base_bdevs": 2, 00:10:45.211 "num_base_bdevs_discovered": 2, 00:10:45.211 "num_base_bdevs_operational": 2, 00:10:45.211 "base_bdevs_list": [ 00:10:45.211 { 00:10:45.211 "name": "pt1", 00:10:45.211 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:45.211 "is_configured": true, 00:10:45.211 "data_offset": 2048, 00:10:45.211 "data_size": 63488 00:10:45.211 }, 00:10:45.211 { 00:10:45.211 "name": "pt2", 00:10:45.211 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:45.211 "is_configured": true, 00:10:45.211 "data_offset": 2048, 00:10:45.211 "data_size": 63488 00:10:45.211 } 00:10:45.211 ] 00:10:45.211 }' 00:10:45.211 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.211 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.778 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:45.778 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:45.778 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:45.778 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:45.778 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:45.778 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:45.778 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:45.778 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:45.778 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.778 
15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.778 [2024-12-06 15:37:28.893612] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:45.778 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.778 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:45.778 "name": "raid_bdev1", 00:10:45.778 "aliases": [ 00:10:45.778 "ce5eab12-5fe5-4281-8114-92933aa262f2" 00:10:45.778 ], 00:10:45.778 "product_name": "Raid Volume", 00:10:45.778 "block_size": 512, 00:10:45.778 "num_blocks": 126976, 00:10:45.778 "uuid": "ce5eab12-5fe5-4281-8114-92933aa262f2", 00:10:45.778 "assigned_rate_limits": { 00:10:45.778 "rw_ios_per_sec": 0, 00:10:45.778 "rw_mbytes_per_sec": 0, 00:10:45.778 "r_mbytes_per_sec": 0, 00:10:45.779 "w_mbytes_per_sec": 0 00:10:45.779 }, 00:10:45.779 "claimed": false, 00:10:45.779 "zoned": false, 00:10:45.779 "supported_io_types": { 00:10:45.779 "read": true, 00:10:45.779 "write": true, 00:10:45.779 "unmap": true, 00:10:45.779 "flush": true, 00:10:45.779 "reset": true, 00:10:45.779 "nvme_admin": false, 00:10:45.779 "nvme_io": false, 00:10:45.779 "nvme_io_md": false, 00:10:45.779 "write_zeroes": true, 00:10:45.779 "zcopy": false, 00:10:45.779 "get_zone_info": false, 00:10:45.779 "zone_management": false, 00:10:45.779 "zone_append": false, 00:10:45.779 "compare": false, 00:10:45.779 "compare_and_write": false, 00:10:45.779 "abort": false, 00:10:45.779 "seek_hole": false, 00:10:45.779 "seek_data": false, 00:10:45.779 "copy": false, 00:10:45.779 "nvme_iov_md": false 00:10:45.779 }, 00:10:45.779 "memory_domains": [ 00:10:45.779 { 00:10:45.779 "dma_device_id": "system", 00:10:45.779 "dma_device_type": 1 00:10:45.779 }, 00:10:45.779 { 00:10:45.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.779 "dma_device_type": 2 00:10:45.779 }, 00:10:45.779 { 00:10:45.779 "dma_device_id": "system", 00:10:45.779 
"dma_device_type": 1 00:10:45.779 }, 00:10:45.779 { 00:10:45.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.779 "dma_device_type": 2 00:10:45.779 } 00:10:45.779 ], 00:10:45.779 "driver_specific": { 00:10:45.779 "raid": { 00:10:45.779 "uuid": "ce5eab12-5fe5-4281-8114-92933aa262f2", 00:10:45.779 "strip_size_kb": 64, 00:10:45.779 "state": "online", 00:10:45.779 "raid_level": "concat", 00:10:45.779 "superblock": true, 00:10:45.779 "num_base_bdevs": 2, 00:10:45.779 "num_base_bdevs_discovered": 2, 00:10:45.779 "num_base_bdevs_operational": 2, 00:10:45.779 "base_bdevs_list": [ 00:10:45.779 { 00:10:45.779 "name": "pt1", 00:10:45.779 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:45.779 "is_configured": true, 00:10:45.779 "data_offset": 2048, 00:10:45.779 "data_size": 63488 00:10:45.779 }, 00:10:45.779 { 00:10:45.779 "name": "pt2", 00:10:45.779 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:45.779 "is_configured": true, 00:10:45.779 "data_offset": 2048, 00:10:45.779 "data_size": 63488 00:10:45.779 } 00:10:45.779 ] 00:10:45.779 } 00:10:45.779 } 00:10:45.779 }' 00:10:45.779 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:45.779 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:45.779 pt2' 00:10:45.779 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.779 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:45.779 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.779 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:45.779 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:10:45.779 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.779 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.779 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.038 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.038 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.038 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.038 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.038 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:46.038 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.038 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.038 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.038 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.038 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.038 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:46.038 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.038 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.038 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:46.038 [2024-12-06 15:37:29.133204] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:10:46.038 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.038 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ce5eab12-5fe5-4281-8114-92933aa262f2 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ce5eab12-5fe5-4281-8114-92933aa262f2 ']' 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.039 [2024-12-06 15:37:29.172806] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:46.039 [2024-12-06 15:37:29.172847] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:46.039 [2024-12-06 15:37:29.172972] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:46.039 [2024-12-06 15:37:29.173037] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:46.039 [2024-12-06 15:37:29.173054] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.039 
15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT 
rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.039 [2024-12-06 15:37:29.304740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:46.039 [2024-12-06 15:37:29.307301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:46.039 [2024-12-06 15:37:29.307385] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:46.039 [2024-12-06 15:37:29.307462] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:46.039 [2024-12-06 15:37:29.307481] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:46.039 [2024-12-06 15:37:29.307496] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:46.039 request: 00:10:46.039 { 00:10:46.039 "name": "raid_bdev1", 00:10:46.039 "raid_level": "concat", 00:10:46.039 "base_bdevs": [ 00:10:46.039 "malloc1", 00:10:46.039 "malloc2" 00:10:46.039 ], 00:10:46.039 "strip_size_kb": 64, 00:10:46.039 "superblock": false, 00:10:46.039 "method": "bdev_raid_create", 00:10:46.039 "req_id": 1 00:10:46.039 } 00:10:46.039 Got JSON-RPC error response 00:10:46.039 response: 00:10:46.039 { 00:10:46.039 "code": -17, 00:10:46.039 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:46.039 } 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.039 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.297 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.297 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:46.297 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:46.297 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:10:46.297 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.297 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.297 [2024-12-06 15:37:29.368625] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:46.297 [2024-12-06 15:37:29.368717] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.297 [2024-12-06 15:37:29.368742] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:46.297 [2024-12-06 15:37:29.368759] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.297 [2024-12-06 15:37:29.371787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.297 [2024-12-06 15:37:29.371839] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:46.297 [2024-12-06 15:37:29.371958] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:46.297 [2024-12-06 15:37:29.372035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:46.297 pt1 00:10:46.297 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.297 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:10:46.297 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:46.297 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.297 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:46.297 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.297 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:10:46.297 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.297 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.297 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.297 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.297 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.297 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:46.297 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.297 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.297 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.297 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.297 "name": "raid_bdev1", 00:10:46.297 "uuid": "ce5eab12-5fe5-4281-8114-92933aa262f2", 00:10:46.297 "strip_size_kb": 64, 00:10:46.297 "state": "configuring", 00:10:46.297 "raid_level": "concat", 00:10:46.297 "superblock": true, 00:10:46.297 "num_base_bdevs": 2, 00:10:46.297 "num_base_bdevs_discovered": 1, 00:10:46.297 "num_base_bdevs_operational": 2, 00:10:46.297 "base_bdevs_list": [ 00:10:46.297 { 00:10:46.297 "name": "pt1", 00:10:46.297 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:46.297 "is_configured": true, 00:10:46.297 "data_offset": 2048, 00:10:46.297 "data_size": 63488 00:10:46.297 }, 00:10:46.297 { 00:10:46.297 "name": null, 00:10:46.297 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:46.297 "is_configured": false, 00:10:46.297 "data_offset": 2048, 00:10:46.297 "data_size": 63488 00:10:46.297 } 00:10:46.297 ] 00:10:46.297 }' 00:10:46.297 15:37:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.297 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.556 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:10:46.556 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:46.556 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:46.556 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:46.556 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.556 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.556 [2024-12-06 15:37:29.831957] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:46.556 [2024-12-06 15:37:29.832220] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.556 [2024-12-06 15:37:29.832290] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:46.556 [2024-12-06 15:37:29.832390] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.556 [2024-12-06 15:37:29.833031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.556 [2024-12-06 15:37:29.833072] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:46.556 [2024-12-06 15:37:29.833187] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:46.556 [2024-12-06 15:37:29.833223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:46.556 [2024-12-06 15:37:29.833371] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:46.556 [2024-12-06 15:37:29.833385] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:46.556 [2024-12-06 15:37:29.833702] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:46.556 [2024-12-06 15:37:29.833867] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:46.556 [2024-12-06 15:37:29.833877] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:46.556 [2024-12-06 15:37:29.834050] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:46.556 pt2 00:10:46.556 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.556 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:46.556 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:46.556 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:46.556 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:46.556 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:46.556 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:46.556 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.556 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:46.556 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.556 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.556 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.556 15:37:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.556 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.556 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:46.556 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.556 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.815 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.815 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.815 "name": "raid_bdev1", 00:10:46.815 "uuid": "ce5eab12-5fe5-4281-8114-92933aa262f2", 00:10:46.815 "strip_size_kb": 64, 00:10:46.815 "state": "online", 00:10:46.815 "raid_level": "concat", 00:10:46.815 "superblock": true, 00:10:46.815 "num_base_bdevs": 2, 00:10:46.815 "num_base_bdevs_discovered": 2, 00:10:46.815 "num_base_bdevs_operational": 2, 00:10:46.815 "base_bdevs_list": [ 00:10:46.815 { 00:10:46.815 "name": "pt1", 00:10:46.815 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:46.815 "is_configured": true, 00:10:46.815 "data_offset": 2048, 00:10:46.815 "data_size": 63488 00:10:46.815 }, 00:10:46.815 { 00:10:46.815 "name": "pt2", 00:10:46.815 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:46.815 "is_configured": true, 00:10:46.815 "data_offset": 2048, 00:10:46.815 "data_size": 63488 00:10:46.815 } 00:10:46.815 ] 00:10:46.815 }' 00:10:46.815 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.815 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.074 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:47.074 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:47.074 
15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:47.074 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:47.074 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:47.074 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:47.074 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:47.074 15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.074 15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.074 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:47.074 [2024-12-06 15:37:30.247959] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:47.074 15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.074 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:47.074 "name": "raid_bdev1", 00:10:47.074 "aliases": [ 00:10:47.074 "ce5eab12-5fe5-4281-8114-92933aa262f2" 00:10:47.074 ], 00:10:47.074 "product_name": "Raid Volume", 00:10:47.074 "block_size": 512, 00:10:47.074 "num_blocks": 126976, 00:10:47.074 "uuid": "ce5eab12-5fe5-4281-8114-92933aa262f2", 00:10:47.074 "assigned_rate_limits": { 00:10:47.074 "rw_ios_per_sec": 0, 00:10:47.074 "rw_mbytes_per_sec": 0, 00:10:47.074 "r_mbytes_per_sec": 0, 00:10:47.074 "w_mbytes_per_sec": 0 00:10:47.074 }, 00:10:47.074 "claimed": false, 00:10:47.074 "zoned": false, 00:10:47.074 "supported_io_types": { 00:10:47.074 "read": true, 00:10:47.074 "write": true, 00:10:47.074 "unmap": true, 00:10:47.074 "flush": true, 00:10:47.074 "reset": true, 00:10:47.074 "nvme_admin": false, 00:10:47.074 "nvme_io": false, 00:10:47.074 "nvme_io_md": false, 00:10:47.074 
"write_zeroes": true, 00:10:47.074 "zcopy": false, 00:10:47.074 "get_zone_info": false, 00:10:47.074 "zone_management": false, 00:10:47.074 "zone_append": false, 00:10:47.074 "compare": false, 00:10:47.074 "compare_and_write": false, 00:10:47.074 "abort": false, 00:10:47.074 "seek_hole": false, 00:10:47.074 "seek_data": false, 00:10:47.074 "copy": false, 00:10:47.074 "nvme_iov_md": false 00:10:47.074 }, 00:10:47.074 "memory_domains": [ 00:10:47.074 { 00:10:47.074 "dma_device_id": "system", 00:10:47.074 "dma_device_type": 1 00:10:47.074 }, 00:10:47.074 { 00:10:47.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.074 "dma_device_type": 2 00:10:47.074 }, 00:10:47.074 { 00:10:47.074 "dma_device_id": "system", 00:10:47.074 "dma_device_type": 1 00:10:47.074 }, 00:10:47.074 { 00:10:47.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.074 "dma_device_type": 2 00:10:47.074 } 00:10:47.074 ], 00:10:47.074 "driver_specific": { 00:10:47.074 "raid": { 00:10:47.074 "uuid": "ce5eab12-5fe5-4281-8114-92933aa262f2", 00:10:47.074 "strip_size_kb": 64, 00:10:47.074 "state": "online", 00:10:47.074 "raid_level": "concat", 00:10:47.074 "superblock": true, 00:10:47.074 "num_base_bdevs": 2, 00:10:47.074 "num_base_bdevs_discovered": 2, 00:10:47.074 "num_base_bdevs_operational": 2, 00:10:47.074 "base_bdevs_list": [ 00:10:47.074 { 00:10:47.074 "name": "pt1", 00:10:47.074 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:47.074 "is_configured": true, 00:10:47.074 "data_offset": 2048, 00:10:47.074 "data_size": 63488 00:10:47.074 }, 00:10:47.074 { 00:10:47.074 "name": "pt2", 00:10:47.074 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:47.074 "is_configured": true, 00:10:47.074 "data_offset": 2048, 00:10:47.074 "data_size": 63488 00:10:47.074 } 00:10:47.074 ] 00:10:47.074 } 00:10:47.074 } 00:10:47.074 }' 00:10:47.074 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:10:47.074 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:47.074 pt2' 00:10:47.074 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.074 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:47.074 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.074 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:47.074 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.074 15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.074 15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.334 15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.334 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.334 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.334 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.334 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.334 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:47.334 15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.334 15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.334 15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.334 15:37:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.334 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.334 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:47.334 15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.334 15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.334 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:47.334 [2024-12-06 15:37:30.455966] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:47.334 15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.334 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ce5eab12-5fe5-4281-8114-92933aa262f2 '!=' ce5eab12-5fe5-4281-8114-92933aa262f2 ']' 00:10:47.334 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:47.334 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:47.334 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:47.334 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62225 00:10:47.334 15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62225 ']' 00:10:47.334 15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62225 00:10:47.334 15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:47.334 15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:47.334 15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62225 00:10:47.334 15:37:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:47.334 15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:47.334 15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62225' 00:10:47.334 killing process with pid 62225 00:10:47.334 15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62225 00:10:47.334 [2024-12-06 15:37:30.525699] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:47.334 15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62225 00:10:47.334 [2024-12-06 15:37:30.525825] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:47.334 [2024-12-06 15:37:30.525891] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:47.334 [2024-12-06 15:37:30.525907] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:47.593 [2024-12-06 15:37:30.759535] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:48.977 15:37:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:48.977 00:10:48.977 real 0m4.710s 00:10:48.977 user 0m6.379s 00:10:48.977 sys 0m1.029s 00:10:48.977 15:37:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.977 15:37:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.977 ************************************ 00:10:48.977 END TEST raid_superblock_test 00:10:48.977 ************************************ 00:10:48.977 15:37:32 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:10:48.977 15:37:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:48.977 15:37:32 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:10:48.977 15:37:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:48.977 ************************************ 00:10:48.977 START TEST raid_read_error_test 00:10:48.977 ************************************ 00:10:48.977 15:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:10:48.977 15:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:48.977 15:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:48.977 15:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:48.977 15:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:48.977 15:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:48.977 15:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:48.977 15:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:48.977 15:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:48.977 15:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:48.977 15:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:48.977 15:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:48.977 15:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:48.977 15:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:48.977 15:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:48.977 15:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:48.977 15:37:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:48.977 15:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:48.977 15:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:48.977 15:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:48.977 15:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:48.977 15:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:48.977 15:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:48.977 15:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.cra3OILHFJ 00:10:48.977 15:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62442 00:10:48.978 15:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:48.978 15:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62442 00:10:48.978 15:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62442 ']' 00:10:48.978 15:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.978 15:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:48.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.978 15:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
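bdevperf is launched with `-z` (start suspended, wait for RPC) and the harness then blocks in `waitforlisten` until the application's RPC socket at `/var/tmp/spdk.sock` is ready. A minimal sketch of such a wait loop, as a simplifying assumption about what the helper does (the real `waitforlisten` also probes the RPC server itself):

```shell
# Poll until a UNIX-domain socket appears, or give up after max_retries polls.
# Simplified stand-in for the autotest waitforlisten helper (assumption).
waitforsocket() {
    local sock=$1
    local max_retries=${2:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        if [ -S "$sock" ]; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

# Example: a path that never appears times out and returns nonzero.
if ! waitforsocket /tmp/no-such-spdk.sock 3; then
    echo "timed out"
fi
```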
00:10:48.978 15:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:48.978 15:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.978 [2024-12-06 15:37:32.225985] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:10:48.978 [2024-12-06 15:37:32.226408] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62442 ] 00:10:49.236 [2024-12-06 15:37:32.415164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.496 [2024-12-06 15:37:32.560632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.754 [2024-12-06 15:37:32.795993] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:49.754 [2024-12-06 15:37:32.796065] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.014 BaseBdev1_malloc 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
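The trace above begins building the error-injection stack for BaseBdev1: a malloc bdev is wrapped by an error bdev (`bdev_error_create` names it `EE_<base>`), which later gets a passthru on top. The corresponding rpc.py calls, collected but not executed here since they need a live SPDK target (the rpc.py path is an assumption):

```shell
# RPC calls corresponding to the stack built in the log. Only listed, not run,
# because they require a running SPDK application socket.
rpc="scripts/rpc.py"  # assumed path relative to an SPDK checkout
cmds=(
  "$rpc bdev_malloc_create 32 512 -b BaseBdev1_malloc"            # 32 MB volume, 512 B blocks
  "$rpc bdev_error_create BaseBdev1_malloc"                       # creates EE_BaseBdev1_malloc
  "$rpc bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1" # top-level name used by the raid
)
printf '%s\n' "${cmds[@]}"
```

Later in the log, `bdev_error_inject_error EE_BaseBdev1_malloc read failure` arms the middle layer so that reads issued through BaseBdev1 fail, which is what this read-error test exercises.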
00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.014 true 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.014 [2024-12-06 15:37:33.153974] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:50.014 [2024-12-06 15:37:33.154264] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.014 [2024-12-06 15:37:33.154305] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:50.014 [2024-12-06 15:37:33.154322] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.014 [2024-12-06 15:37:33.157300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.014 [2024-12-06 15:37:33.157473] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:50.014 BaseBdev1 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
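Throughout this log, `verify_raid_bdev_state` selects the raid bdev's entry from `bdev_raid_get_bdevs all` with `jq 'select(.name == ...)'` and then compares fields such as `state`. A self-contained sketch of that selection step with sample data modeled on the log's output (requires `jq`):

```shell
# Sample shaped like bdev_raid_get_bdevs output; values mirror the log.
all_bdevs='[
  {"name":"raid_bdev1","state":"online","raid_level":"concat","num_base_bdevs_discovered":2},
  {"name":"other_raid","state":"configuring","raid_level":"raid1","num_base_bdevs_discovered":1}
]'
# Same filter as in the log: keep only the bdev being verified.
tmp=$(echo "$all_bdevs" | jq -r '.[] | select(.name == "raid_bdev1")')
state=$(echo "$tmp" | jq -r '.state')
echo "$state"
```

For the sample above the extracted state is `online`, the value the helper compares against `expected_state`.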
00:10:50.014 BaseBdev2_malloc 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.014 true 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.014 [2024-12-06 15:37:33.218976] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:50.014 [2024-12-06 15:37:33.219054] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.014 [2024-12-06 15:37:33.219077] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:50.014 [2024-12-06 15:37:33.219092] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.014 [2024-12-06 15:37:33.221900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.014 [2024-12-06 15:37:33.221948] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:50.014 BaseBdev2 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:50.014 
15:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.014 [2024-12-06 15:37:33.227057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:50.014 [2024-12-06 15:37:33.229747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:50.014 [2024-12-06 15:37:33.229981] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:50.014 [2024-12-06 15:37:33.230010] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:50.014 [2024-12-06 15:37:33.230324] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:50.014 [2024-12-06 15:37:33.230727] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:50.014 [2024-12-06 15:37:33.230780] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:50.014 [2024-12-06 15:37:33.231074] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.014 "name": "raid_bdev1", 00:10:50.014 "uuid": "f1889483-ffe6-41fa-91ed-e78421f715cd", 00:10:50.014 "strip_size_kb": 64, 00:10:50.014 "state": "online", 00:10:50.014 "raid_level": "concat", 00:10:50.014 "superblock": true, 00:10:50.014 "num_base_bdevs": 2, 00:10:50.014 "num_base_bdevs_discovered": 2, 00:10:50.014 "num_base_bdevs_operational": 2, 00:10:50.014 "base_bdevs_list": [ 00:10:50.014 { 00:10:50.014 "name": "BaseBdev1", 00:10:50.014 "uuid": "5870cc3c-2a98-5121-bcfd-67d8e49a26c6", 00:10:50.014 "is_configured": true, 00:10:50.014 "data_offset": 2048, 00:10:50.014 "data_size": 63488 00:10:50.014 }, 00:10:50.014 { 00:10:50.014 "name": "BaseBdev2", 00:10:50.014 "uuid": "df3f8b07-04e1-5bf2-b0a4-1d7bb8e09608", 00:10:50.014 "is_configured": true, 00:10:50.014 "data_offset": 2048, 00:10:50.014 "data_size": 63488 00:10:50.014 } 00:10:50.014 ] 00:10:50.014 }' 00:10:50.014 15:37:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.014 15:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.583 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:50.583 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:50.583 [2024-12-06 15:37:33.732079] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:51.516 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:51.516 15:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.516 15:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.516 15:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.516 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:51.516 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:51.516 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:10:51.516 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:51.516 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:51.516 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:51.516 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:51.516 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.516 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:10:51.517 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.517 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.517 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.517 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.517 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:51.517 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.517 15:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.517 15:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.517 15:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.517 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.517 "name": "raid_bdev1", 00:10:51.517 "uuid": "f1889483-ffe6-41fa-91ed-e78421f715cd", 00:10:51.517 "strip_size_kb": 64, 00:10:51.517 "state": "online", 00:10:51.517 "raid_level": "concat", 00:10:51.517 "superblock": true, 00:10:51.517 "num_base_bdevs": 2, 00:10:51.517 "num_base_bdevs_discovered": 2, 00:10:51.517 "num_base_bdevs_operational": 2, 00:10:51.517 "base_bdevs_list": [ 00:10:51.517 { 00:10:51.517 "name": "BaseBdev1", 00:10:51.517 "uuid": "5870cc3c-2a98-5121-bcfd-67d8e49a26c6", 00:10:51.517 "is_configured": true, 00:10:51.517 "data_offset": 2048, 00:10:51.517 "data_size": 63488 00:10:51.517 }, 00:10:51.517 { 00:10:51.517 "name": "BaseBdev2", 00:10:51.517 "uuid": "df3f8b07-04e1-5bf2-b0a4-1d7bb8e09608", 00:10:51.517 "is_configured": true, 00:10:51.517 "data_offset": 2048, 00:10:51.517 "data_size": 63488 00:10:51.517 } 00:10:51.517 ] 00:10:51.517 }' 00:10:51.517 15:37:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.517 15:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.774 15:37:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:51.774 15:37:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.774 15:37:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.774 [2024-12-06 15:37:35.065559] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:51.774 [2024-12-06 15:37:35.065605] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:52.031 [2024-12-06 15:37:35.068462] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:52.031 [2024-12-06 15:37:35.068692] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:52.031 [2024-12-06 15:37:35.068747] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:52.031 [2024-12-06 15:37:35.068769] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:52.031 { 00:10:52.031 "results": [ 00:10:52.031 { 00:10:52.031 "job": "raid_bdev1", 00:10:52.031 "core_mask": "0x1", 00:10:52.031 "workload": "randrw", 00:10:52.031 "percentage": 50, 00:10:52.031 "status": "finished", 00:10:52.031 "queue_depth": 1, 00:10:52.031 "io_size": 131072, 00:10:52.031 "runtime": 1.333314, 00:10:52.031 "iops": 13356.19366480814, 00:10:52.031 "mibps": 1669.5242081010174, 00:10:52.031 "io_failed": 1, 00:10:52.031 "io_timeout": 0, 00:10:52.031 "avg_latency_us": 105.02958131588626, 00:10:52.031 "min_latency_us": 27.759036144578314, 00:10:52.031 "max_latency_us": 1460.7421686746989 00:10:52.031 } 00:10:52.031 ], 00:10:52.031 "core_count": 1 00:10:52.031 } 00:10:52.031 15:37:35 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.031 15:37:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62442 00:10:52.031 15:37:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62442 ']' 00:10:52.031 15:37:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62442 00:10:52.031 15:37:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:52.031 15:37:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:52.031 15:37:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62442 00:10:52.031 15:37:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:52.031 killing process with pid 62442 00:10:52.031 15:37:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:52.031 15:37:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62442' 00:10:52.031 15:37:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62442 00:10:52.031 15:37:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62442 00:10:52.031 [2024-12-06 15:37:35.122812] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:52.031 [2024-12-06 15:37:35.277972] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:53.406 15:37:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.cra3OILHFJ 00:10:53.406 15:37:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:53.406 15:37:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:53.406 15:37:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:10:53.406 15:37:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:53.406 ************************************ 00:10:53.406 END TEST raid_read_error_test 00:10:53.406 ************************************ 00:10:53.406 15:37:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:53.406 15:37:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:53.406 15:37:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:10:53.406 00:10:53.406 real 0m4.533s 00:10:53.406 user 0m5.210s 00:10:53.406 sys 0m0.732s 00:10:53.406 15:37:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:53.406 15:37:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.406 15:37:36 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:10:53.406 15:37:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:53.406 15:37:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.664 15:37:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:53.664 ************************************ 00:10:53.664 START TEST raid_write_error_test 00:10:53.664 ************************************ 00:10:53.664 15:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:10:53.664 15:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:53.664 15:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:53.664 15:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:53.664 15:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:53.664 15:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
00:10:53.664 15:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:53.664 15:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:53.664 15:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:53.664 15:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:53.664 15:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:53.664 15:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:53.664 15:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:53.664 15:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:53.664 15:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:53.664 15:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:53.664 15:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:53.664 15:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:53.664 15:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:53.664 15:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:53.664 15:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:53.664 15:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:53.664 15:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:53.665 15:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.uq5BcBnx7P 00:10:53.665 15:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62582 
00:10:53.665 15:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62582 00:10:53.665 15:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:53.665 15:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62582 ']' 00:10:53.665 15:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.665 15:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:53.665 15:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.665 15:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:53.665 15:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.665 [2024-12-06 15:37:36.840686] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:10:53.665 [2024-12-06 15:37:36.840851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62582 ] 00:10:53.923 [2024-12-06 15:37:37.028153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.923 [2024-12-06 15:37:37.177867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.180 [2024-12-06 15:37:37.421483] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.180 [2024-12-06 15:37:37.421599] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.747 15:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:54.747 15:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:54.747 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:54.747 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:54.747 15:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.747 15:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.747 BaseBdev1_malloc 00:10:54.747 15:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.747 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:54.747 15:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.747 15:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.747 true 00:10:54.747 15:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:54.747 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:54.747 15:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.747 15:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.747 [2024-12-06 15:37:37.804859] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:54.747 [2024-12-06 15:37:37.804964] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.747 [2024-12-06 15:37:37.804999] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:54.747 [2024-12-06 15:37:37.805016] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.747 [2024-12-06 15:37:37.808062] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.747 [2024-12-06 15:37:37.808129] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:54.747 BaseBdev1 00:10:54.747 15:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.747 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:54.747 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:54.747 15:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.747 15:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.747 BaseBdev2_malloc 00:10:54.747 15:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.747 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:54.747 15:37:37 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.747 15:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.747 true 00:10:54.747 15:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.747 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:54.747 15:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.747 15:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.747 [2024-12-06 15:37:37.886675] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:54.747 [2024-12-06 15:37:37.886784] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.747 [2024-12-06 15:37:37.886813] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:54.747 [2024-12-06 15:37:37.886831] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.747 [2024-12-06 15:37:37.889876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.747 [2024-12-06 15:37:37.889936] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:54.747 BaseBdev2 00:10:54.747 15:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.748 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:54.748 15:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.748 15:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.748 [2024-12-06 15:37:37.898782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:10:54.748 [2024-12-06 15:37:37.901386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:54.748 [2024-12-06 15:37:37.901832] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:54.748 [2024-12-06 15:37:37.901858] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:54.748 [2024-12-06 15:37:37.902275] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:54.748 [2024-12-06 15:37:37.902509] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:54.748 [2024-12-06 15:37:37.902545] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:54.748 [2024-12-06 15:37:37.902846] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:54.748 15:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.748 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:54.748 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:54.748 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:54.748 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:54.748 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.748 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:54.748 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.748 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.748 15:37:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.748 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.748 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:54.748 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.748 15:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.748 15:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.748 15:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.748 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.748 "name": "raid_bdev1", 00:10:54.748 "uuid": "e98a1bdb-2568-477c-af2e-d141881e4829", 00:10:54.748 "strip_size_kb": 64, 00:10:54.748 "state": "online", 00:10:54.748 "raid_level": "concat", 00:10:54.748 "superblock": true, 00:10:54.748 "num_base_bdevs": 2, 00:10:54.748 "num_base_bdevs_discovered": 2, 00:10:54.748 "num_base_bdevs_operational": 2, 00:10:54.748 "base_bdevs_list": [ 00:10:54.748 { 00:10:54.748 "name": "BaseBdev1", 00:10:54.748 "uuid": "d0c41e35-487a-5771-a3a7-640b54547932", 00:10:54.748 "is_configured": true, 00:10:54.748 "data_offset": 2048, 00:10:54.748 "data_size": 63488 00:10:54.748 }, 00:10:54.748 { 00:10:54.748 "name": "BaseBdev2", 00:10:54.748 "uuid": "ff83a871-99c3-5876-88ab-40e31651c23b", 00:10:54.748 "is_configured": true, 00:10:54.748 "data_offset": 2048, 00:10:54.748 "data_size": 63488 00:10:54.748 } 00:10:54.748 ] 00:10:54.748 }' 00:10:54.748 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.748 15:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.006 15:37:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:10:55.006 15:37:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:55.265 [2024-12-06 15:37:38.395725] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:56.202 15:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:56.202 15:37:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.202 15:37:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.202 15:37:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.202 15:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:56.202 15:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:56.202 15:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:10:56.202 15:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:56.202 15:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:56.202 15:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:56.202 15:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:56.202 15:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.202 15:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:56.202 15:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.202 15:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:10:56.202 15:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.202 15:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.202 15:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.202 15:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:56.202 15:37:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.202 15:37:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.202 15:37:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.202 15:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.202 "name": "raid_bdev1", 00:10:56.202 "uuid": "e98a1bdb-2568-477c-af2e-d141881e4829", 00:10:56.202 "strip_size_kb": 64, 00:10:56.202 "state": "online", 00:10:56.202 "raid_level": "concat", 00:10:56.202 "superblock": true, 00:10:56.202 "num_base_bdevs": 2, 00:10:56.202 "num_base_bdevs_discovered": 2, 00:10:56.202 "num_base_bdevs_operational": 2, 00:10:56.202 "base_bdevs_list": [ 00:10:56.202 { 00:10:56.202 "name": "BaseBdev1", 00:10:56.202 "uuid": "d0c41e35-487a-5771-a3a7-640b54547932", 00:10:56.202 "is_configured": true, 00:10:56.202 "data_offset": 2048, 00:10:56.202 "data_size": 63488 00:10:56.202 }, 00:10:56.202 { 00:10:56.202 "name": "BaseBdev2", 00:10:56.202 "uuid": "ff83a871-99c3-5876-88ab-40e31651c23b", 00:10:56.202 "is_configured": true, 00:10:56.202 "data_offset": 2048, 00:10:56.202 "data_size": 63488 00:10:56.202 } 00:10:56.202 ] 00:10:56.202 }' 00:10:56.202 15:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.202 15:37:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.461 15:37:39 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:56.461 15:37:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.461 15:37:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.461 [2024-12-06 15:37:39.713436] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:56.461 [2024-12-06 15:37:39.713490] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:56.461 [2024-12-06 15:37:39.716269] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:56.461 [2024-12-06 15:37:39.716331] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:56.461 [2024-12-06 15:37:39.716372] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:56.461 [2024-12-06 15:37:39.716389] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:56.461 { 00:10:56.461 "results": [ 00:10:56.461 { 00:10:56.461 "job": "raid_bdev1", 00:10:56.461 "core_mask": "0x1", 00:10:56.461 "workload": "randrw", 00:10:56.461 "percentage": 50, 00:10:56.461 "status": "finished", 00:10:56.461 "queue_depth": 1, 00:10:56.461 "io_size": 131072, 00:10:56.461 "runtime": 1.317375, 00:10:56.461 "iops": 13091.185121928076, 00:10:56.461 "mibps": 1636.3981402410095, 00:10:56.461 "io_failed": 1, 00:10:56.461 "io_timeout": 0, 00:10:56.461 "avg_latency_us": 107.36023274404512, 00:10:56.461 "min_latency_us": 27.759036144578314, 00:10:56.461 "max_latency_us": 1579.1807228915663 00:10:56.461 } 00:10:56.461 ], 00:10:56.461 "core_count": 1 00:10:56.461 } 00:10:56.461 15:37:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.461 15:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62582 00:10:56.461 15:37:39 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62582 ']' 00:10:56.461 15:37:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62582 00:10:56.461 15:37:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:56.461 15:37:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:56.461 15:37:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62582 00:10:56.720 15:37:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:56.720 killing process with pid 62582 00:10:56.720 15:37:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:56.720 15:37:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62582' 00:10:56.720 15:37:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62582 00:10:56.720 [2024-12-06 15:37:39.768102] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:56.720 15:37:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62582 00:10:56.720 [2024-12-06 15:37:39.923461] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:58.105 15:37:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.uq5BcBnx7P 00:10:58.105 15:37:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:58.105 15:37:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:58.105 15:37:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:10:58.105 15:37:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:58.105 15:37:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:58.105 15:37:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:58.105 ************************************ 00:10:58.105 END TEST raid_write_error_test 00:10:58.105 ************************************ 00:10:58.105 15:37:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:10:58.105 00:10:58.105 real 0m4.554s 00:10:58.105 user 0m5.272s 00:10:58.105 sys 0m0.723s 00:10:58.105 15:37:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:58.105 15:37:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.105 15:37:41 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:58.105 15:37:41 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:10:58.105 15:37:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:58.105 15:37:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:58.105 15:37:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:58.105 ************************************ 00:10:58.105 START TEST raid_state_function_test 00:10:58.105 ************************************ 00:10:58.105 15:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:10:58.105 15:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:58.105 15:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:58.105 15:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:58.105 15:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:58.105 15:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:58.105 15:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:10:58.105 15:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:58.105 15:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:58.105 15:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:58.105 15:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:58.105 15:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:58.105 15:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:58.105 15:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:58.105 15:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:58.105 15:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:58.105 15:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:58.105 15:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:58.105 15:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:58.105 15:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:58.105 15:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:58.105 15:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:58.105 15:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:58.105 15:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62726 00:10:58.105 15:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:58.105 Process raid pid: 62726 00:10:58.105 15:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62726' 00:10:58.105 15:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62726 00:10:58.105 15:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62726 ']' 00:10:58.105 15:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.105 15:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:58.105 15:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.105 15:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:58.105 15:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.363 [2024-12-06 15:37:41.472983] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:10:58.363 [2024-12-06 15:37:41.473924] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:58.620 [2024-12-06 15:37:41.680643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.620 [2024-12-06 15:37:41.832145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.878 [2024-12-06 15:37:42.089527] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:58.878 [2024-12-06 15:37:42.089856] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:59.137 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:59.137 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:59.137 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:59.137 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.137 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.137 [2024-12-06 15:37:42.362740] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:59.137 [2024-12-06 15:37:42.362826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:59.137 [2024-12-06 15:37:42.362840] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:59.137 [2024-12-06 15:37:42.362855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:59.137 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.137 15:37:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:59.137 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.137 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.137 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:59.137 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:59.137 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:59.137 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.137 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.137 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.137 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.137 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.137 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.137 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.137 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.137 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.137 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.137 "name": "Existed_Raid", 00:10:59.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.137 "strip_size_kb": 0, 00:10:59.137 "state": "configuring", 00:10:59.137 
"raid_level": "raid1", 00:10:59.137 "superblock": false, 00:10:59.137 "num_base_bdevs": 2, 00:10:59.137 "num_base_bdevs_discovered": 0, 00:10:59.137 "num_base_bdevs_operational": 2, 00:10:59.137 "base_bdevs_list": [ 00:10:59.137 { 00:10:59.137 "name": "BaseBdev1", 00:10:59.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.137 "is_configured": false, 00:10:59.137 "data_offset": 0, 00:10:59.137 "data_size": 0 00:10:59.137 }, 00:10:59.137 { 00:10:59.137 "name": "BaseBdev2", 00:10:59.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.137 "is_configured": false, 00:10:59.137 "data_offset": 0, 00:10:59.137 "data_size": 0 00:10:59.137 } 00:10:59.137 ] 00:10:59.137 }' 00:10:59.137 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.137 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.705 [2024-12-06 15:37:42.782184] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:59.705 [2024-12-06 15:37:42.782390] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:59.705 [2024-12-06 15:37:42.794136] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:59.705 [2024-12-06 15:37:42.794190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:59.705 [2024-12-06 15:37:42.794202] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:59.705 [2024-12-06 15:37:42.794220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.705 [2024-12-06 15:37:42.847568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:59.705 BaseBdev1 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.705 [ 00:10:59.705 { 00:10:59.705 "name": "BaseBdev1", 00:10:59.705 "aliases": [ 00:10:59.705 "d92594c4-41d1-40b2-b463-15d87548c35e" 00:10:59.705 ], 00:10:59.705 "product_name": "Malloc disk", 00:10:59.705 "block_size": 512, 00:10:59.705 "num_blocks": 65536, 00:10:59.705 "uuid": "d92594c4-41d1-40b2-b463-15d87548c35e", 00:10:59.705 "assigned_rate_limits": { 00:10:59.705 "rw_ios_per_sec": 0, 00:10:59.705 "rw_mbytes_per_sec": 0, 00:10:59.705 "r_mbytes_per_sec": 0, 00:10:59.705 "w_mbytes_per_sec": 0 00:10:59.705 }, 00:10:59.705 "claimed": true, 00:10:59.705 "claim_type": "exclusive_write", 00:10:59.705 "zoned": false, 00:10:59.705 "supported_io_types": { 00:10:59.705 "read": true, 00:10:59.705 "write": true, 00:10:59.705 "unmap": true, 00:10:59.705 "flush": true, 00:10:59.705 "reset": true, 00:10:59.705 "nvme_admin": false, 00:10:59.705 "nvme_io": false, 00:10:59.705 "nvme_io_md": false, 00:10:59.705 "write_zeroes": true, 00:10:59.705 "zcopy": true, 00:10:59.705 "get_zone_info": false, 00:10:59.705 "zone_management": false, 00:10:59.705 "zone_append": false, 00:10:59.705 "compare": false, 00:10:59.705 "compare_and_write": false, 00:10:59.705 "abort": true, 00:10:59.705 "seek_hole": false, 00:10:59.705 "seek_data": false, 00:10:59.705 "copy": true, 00:10:59.705 "nvme_iov_md": 
false 00:10:59.705 }, 00:10:59.705 "memory_domains": [ 00:10:59.705 { 00:10:59.705 "dma_device_id": "system", 00:10:59.705 "dma_device_type": 1 00:10:59.705 }, 00:10:59.705 { 00:10:59.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.705 "dma_device_type": 2 00:10:59.705 } 00:10:59.705 ], 00:10:59.705 "driver_specific": {} 00:10:59.705 } 00:10:59.705 ] 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.705 
15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.705 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.705 "name": "Existed_Raid", 00:10:59.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.705 "strip_size_kb": 0, 00:10:59.706 "state": "configuring", 00:10:59.706 "raid_level": "raid1", 00:10:59.706 "superblock": false, 00:10:59.706 "num_base_bdevs": 2, 00:10:59.706 "num_base_bdevs_discovered": 1, 00:10:59.706 "num_base_bdevs_operational": 2, 00:10:59.706 "base_bdevs_list": [ 00:10:59.706 { 00:10:59.706 "name": "BaseBdev1", 00:10:59.706 "uuid": "d92594c4-41d1-40b2-b463-15d87548c35e", 00:10:59.706 "is_configured": true, 00:10:59.706 "data_offset": 0, 00:10:59.706 "data_size": 65536 00:10:59.706 }, 00:10:59.706 { 00:10:59.706 "name": "BaseBdev2", 00:10:59.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.706 "is_configured": false, 00:10:59.706 "data_offset": 0, 00:10:59.706 "data_size": 0 00:10:59.706 } 00:10:59.706 ] 00:10:59.706 }' 00:10:59.706 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.706 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.273 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:00.273 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.273 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.273 [2024-12-06 15:37:43.326943] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:00.273 [2024-12-06 15:37:43.327016] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:00.273 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.273 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:00.273 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.273 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.274 [2024-12-06 15:37:43.338973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:00.274 [2024-12-06 15:37:43.341416] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:00.274 [2024-12-06 15:37:43.341474] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:00.274 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.274 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:00.274 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:00.274 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:00.274 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.274 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.274 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:00.274 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:00.274 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:11:00.274 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.274 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.274 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.274 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.274 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.274 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.274 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.274 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.274 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.274 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.274 "name": "Existed_Raid", 00:11:00.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.274 "strip_size_kb": 0, 00:11:00.274 "state": "configuring", 00:11:00.274 "raid_level": "raid1", 00:11:00.274 "superblock": false, 00:11:00.274 "num_base_bdevs": 2, 00:11:00.274 "num_base_bdevs_discovered": 1, 00:11:00.274 "num_base_bdevs_operational": 2, 00:11:00.274 "base_bdevs_list": [ 00:11:00.274 { 00:11:00.274 "name": "BaseBdev1", 00:11:00.274 "uuid": "d92594c4-41d1-40b2-b463-15d87548c35e", 00:11:00.274 "is_configured": true, 00:11:00.274 "data_offset": 0, 00:11:00.274 "data_size": 65536 00:11:00.274 }, 00:11:00.274 { 00:11:00.274 "name": "BaseBdev2", 00:11:00.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.274 "is_configured": false, 00:11:00.274 "data_offset": 0, 00:11:00.274 "data_size": 0 00:11:00.274 } 00:11:00.274 ] 
00:11:00.274 }' 00:11:00.274 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.274 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.533 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:00.533 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.533 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.533 [2024-12-06 15:37:43.808303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:00.533 [2024-12-06 15:37:43.808647] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:00.533 [2024-12-06 15:37:43.808698] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:00.533 [2024-12-06 15:37:43.809123] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:00.533 [2024-12-06 15:37:43.809467] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:00.533 [2024-12-06 15:37:43.809491] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:00.533 [2024-12-06 15:37:43.809837] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.533 BaseBdev2 00:11:00.533 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.533 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:00.533 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:00.533 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:00.533 15:37:43 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:11:00.533 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:00.533 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:00.533 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:00.533 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.533 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.533 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.533 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:00.533 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.533 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.793 [ 00:11:00.793 { 00:11:00.793 "name": "BaseBdev2", 00:11:00.793 "aliases": [ 00:11:00.793 "39a58898-8dd1-4e63-ad2f-1d32fbd5548e" 00:11:00.793 ], 00:11:00.793 "product_name": "Malloc disk", 00:11:00.793 "block_size": 512, 00:11:00.793 "num_blocks": 65536, 00:11:00.793 "uuid": "39a58898-8dd1-4e63-ad2f-1d32fbd5548e", 00:11:00.793 "assigned_rate_limits": { 00:11:00.793 "rw_ios_per_sec": 0, 00:11:00.793 "rw_mbytes_per_sec": 0, 00:11:00.793 "r_mbytes_per_sec": 0, 00:11:00.793 "w_mbytes_per_sec": 0 00:11:00.793 }, 00:11:00.793 "claimed": true, 00:11:00.793 "claim_type": "exclusive_write", 00:11:00.793 "zoned": false, 00:11:00.793 "supported_io_types": { 00:11:00.793 "read": true, 00:11:00.793 "write": true, 00:11:00.793 "unmap": true, 00:11:00.793 "flush": true, 00:11:00.793 "reset": true, 00:11:00.793 "nvme_admin": false, 00:11:00.793 "nvme_io": false, 00:11:00.793 "nvme_io_md": false, 00:11:00.793 "write_zeroes": 
true, 00:11:00.793 "zcopy": true, 00:11:00.793 "get_zone_info": false, 00:11:00.793 "zone_management": false, 00:11:00.793 "zone_append": false, 00:11:00.793 "compare": false, 00:11:00.793 "compare_and_write": false, 00:11:00.793 "abort": true, 00:11:00.793 "seek_hole": false, 00:11:00.793 "seek_data": false, 00:11:00.793 "copy": true, 00:11:00.793 "nvme_iov_md": false 00:11:00.793 }, 00:11:00.793 "memory_domains": [ 00:11:00.793 { 00:11:00.793 "dma_device_id": "system", 00:11:00.793 "dma_device_type": 1 00:11:00.793 }, 00:11:00.793 { 00:11:00.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.793 "dma_device_type": 2 00:11:00.793 } 00:11:00.793 ], 00:11:00.793 "driver_specific": {} 00:11:00.793 } 00:11:00.793 ] 00:11:00.793 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.793 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:00.793 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:00.793 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:00.793 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:00.793 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.793 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.793 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:00.793 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:00.793 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:00.793 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.793 15:37:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.793 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.793 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.793 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.793 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.793 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.794 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.794 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.794 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.794 "name": "Existed_Raid", 00:11:00.794 "uuid": "210e6f82-b77a-4832-9e40-797e7859f42a", 00:11:00.794 "strip_size_kb": 0, 00:11:00.794 "state": "online", 00:11:00.794 "raid_level": "raid1", 00:11:00.794 "superblock": false, 00:11:00.794 "num_base_bdevs": 2, 00:11:00.794 "num_base_bdevs_discovered": 2, 00:11:00.794 "num_base_bdevs_operational": 2, 00:11:00.794 "base_bdevs_list": [ 00:11:00.794 { 00:11:00.794 "name": "BaseBdev1", 00:11:00.794 "uuid": "d92594c4-41d1-40b2-b463-15d87548c35e", 00:11:00.794 "is_configured": true, 00:11:00.794 "data_offset": 0, 00:11:00.794 "data_size": 65536 00:11:00.794 }, 00:11:00.794 { 00:11:00.794 "name": "BaseBdev2", 00:11:00.794 "uuid": "39a58898-8dd1-4e63-ad2f-1d32fbd5548e", 00:11:00.794 "is_configured": true, 00:11:00.794 "data_offset": 0, 00:11:00.794 "data_size": 65536 00:11:00.794 } 00:11:00.794 ] 00:11:00.794 }' 00:11:00.794 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.794 15:37:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.053 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:01.053 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:01.053 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:01.053 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:01.053 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:01.053 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:01.053 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:01.053 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.053 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:01.053 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.053 [2024-12-06 15:37:44.299973] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:01.053 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.053 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:01.053 "name": "Existed_Raid", 00:11:01.053 "aliases": [ 00:11:01.053 "210e6f82-b77a-4832-9e40-797e7859f42a" 00:11:01.053 ], 00:11:01.053 "product_name": "Raid Volume", 00:11:01.053 "block_size": 512, 00:11:01.053 "num_blocks": 65536, 00:11:01.053 "uuid": "210e6f82-b77a-4832-9e40-797e7859f42a", 00:11:01.053 "assigned_rate_limits": { 00:11:01.053 "rw_ios_per_sec": 0, 00:11:01.053 "rw_mbytes_per_sec": 0, 00:11:01.053 "r_mbytes_per_sec": 0, 00:11:01.053 
"w_mbytes_per_sec": 0 00:11:01.053 }, 00:11:01.053 "claimed": false, 00:11:01.053 "zoned": false, 00:11:01.053 "supported_io_types": { 00:11:01.053 "read": true, 00:11:01.053 "write": true, 00:11:01.053 "unmap": false, 00:11:01.053 "flush": false, 00:11:01.053 "reset": true, 00:11:01.053 "nvme_admin": false, 00:11:01.053 "nvme_io": false, 00:11:01.053 "nvme_io_md": false, 00:11:01.053 "write_zeroes": true, 00:11:01.053 "zcopy": false, 00:11:01.053 "get_zone_info": false, 00:11:01.053 "zone_management": false, 00:11:01.053 "zone_append": false, 00:11:01.053 "compare": false, 00:11:01.053 "compare_and_write": false, 00:11:01.053 "abort": false, 00:11:01.053 "seek_hole": false, 00:11:01.053 "seek_data": false, 00:11:01.053 "copy": false, 00:11:01.053 "nvme_iov_md": false 00:11:01.053 }, 00:11:01.053 "memory_domains": [ 00:11:01.053 { 00:11:01.053 "dma_device_id": "system", 00:11:01.053 "dma_device_type": 1 00:11:01.053 }, 00:11:01.053 { 00:11:01.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.053 "dma_device_type": 2 00:11:01.053 }, 00:11:01.053 { 00:11:01.053 "dma_device_id": "system", 00:11:01.053 "dma_device_type": 1 00:11:01.053 }, 00:11:01.053 { 00:11:01.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.053 "dma_device_type": 2 00:11:01.053 } 00:11:01.053 ], 00:11:01.053 "driver_specific": { 00:11:01.053 "raid": { 00:11:01.053 "uuid": "210e6f82-b77a-4832-9e40-797e7859f42a", 00:11:01.053 "strip_size_kb": 0, 00:11:01.053 "state": "online", 00:11:01.053 "raid_level": "raid1", 00:11:01.053 "superblock": false, 00:11:01.053 "num_base_bdevs": 2, 00:11:01.053 "num_base_bdevs_discovered": 2, 00:11:01.053 "num_base_bdevs_operational": 2, 00:11:01.053 "base_bdevs_list": [ 00:11:01.053 { 00:11:01.053 "name": "BaseBdev1", 00:11:01.053 "uuid": "d92594c4-41d1-40b2-b463-15d87548c35e", 00:11:01.053 "is_configured": true, 00:11:01.053 "data_offset": 0, 00:11:01.053 "data_size": 65536 00:11:01.053 }, 00:11:01.053 { 00:11:01.053 "name": "BaseBdev2", 00:11:01.053 "uuid": 
"39a58898-8dd1-4e63-ad2f-1d32fbd5548e", 00:11:01.053 "is_configured": true, 00:11:01.053 "data_offset": 0, 00:11:01.053 "data_size": 65536 00:11:01.053 } 00:11:01.053 ] 00:11:01.053 } 00:11:01.053 } 00:11:01.053 }' 00:11:01.053 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:01.313 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:01.313 BaseBdev2' 00:11:01.313 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.313 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:01.313 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.313 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:01.313 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.313 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.313 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.313 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.313 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.313 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.313 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.313 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:01.313 15:37:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.313 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.313 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.313 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.313 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.313 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.313 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:01.313 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.313 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.313 [2024-12-06 15:37:44.515746] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:01.571 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.571 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:01.571 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:01.571 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:01.571 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:01.571 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:01.571 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:11:01.571 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:11:01.571 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:01.571 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:01.571 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:01.571 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:01.571 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.571 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.571 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.571 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.571 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.572 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.572 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.572 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.572 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.572 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.572 "name": "Existed_Raid", 00:11:01.572 "uuid": "210e6f82-b77a-4832-9e40-797e7859f42a", 00:11:01.572 "strip_size_kb": 0, 00:11:01.572 "state": "online", 00:11:01.572 "raid_level": "raid1", 00:11:01.572 "superblock": false, 00:11:01.572 "num_base_bdevs": 2, 00:11:01.572 "num_base_bdevs_discovered": 1, 00:11:01.572 "num_base_bdevs_operational": 1, 00:11:01.572 "base_bdevs_list": [ 00:11:01.572 { 
00:11:01.572 "name": null, 00:11:01.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.572 "is_configured": false, 00:11:01.572 "data_offset": 0, 00:11:01.572 "data_size": 65536 00:11:01.572 }, 00:11:01.572 { 00:11:01.572 "name": "BaseBdev2", 00:11:01.572 "uuid": "39a58898-8dd1-4e63-ad2f-1d32fbd5548e", 00:11:01.572 "is_configured": true, 00:11:01.572 "data_offset": 0, 00:11:01.572 "data_size": 65536 00:11:01.572 } 00:11:01.572 ] 00:11:01.572 }' 00:11:01.572 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.572 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.830 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:01.830 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:01.830 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.830 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.830 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.830 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:01.830 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.830 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:01.830 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:01.830 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:01.830 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.830 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:01.830 [2024-12-06 15:37:45.096829] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:01.830 [2024-12-06 15:37:45.096961] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:02.088 [2024-12-06 15:37:45.204363] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:02.088 [2024-12-06 15:37:45.204449] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:02.088 [2024-12-06 15:37:45.204466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:02.088 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.088 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:02.088 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:02.088 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.088 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.088 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.088 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:02.088 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.088 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:02.088 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:02.088 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:11:02.088 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62726 00:11:02.088 15:37:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62726 ']' 00:11:02.088 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62726 00:11:02.088 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:02.088 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:02.088 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62726 00:11:02.088 killing process with pid 62726 00:11:02.088 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:02.089 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:02.089 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62726' 00:11:02.089 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62726 00:11:02.089 [2024-12-06 15:37:45.284319] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:02.089 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62726 00:11:02.089 [2024-12-06 15:37:45.302047] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:03.467 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:03.467 00:11:03.467 real 0m5.185s 00:11:03.467 user 0m7.267s 00:11:03.467 sys 0m1.020s 00:11:03.467 15:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.467 15:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.467 ************************************ 00:11:03.467 END TEST raid_state_function_test 00:11:03.467 ************************************ 00:11:03.467 15:37:46 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:11:03.467 15:37:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:03.467 15:37:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.467 15:37:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:03.467 ************************************ 00:11:03.467 START TEST raid_state_function_test_sb 00:11:03.467 ************************************ 00:11:03.467 15:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:11:03.467 15:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:03.467 15:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:11:03.467 15:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:03.467 15:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:03.467 15:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:03.467 15:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.467 15:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:03.467 15:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:03.467 15:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.467 15:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:03.467 15:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:03.467 15:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.467 15:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:03.467 15:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:03.467 15:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:03.467 15:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:03.467 15:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:03.467 15:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:03.467 15:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:03.467 15:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:03.467 15:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:03.467 15:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:03.467 Process raid pid: 62979 00:11:03.467 15:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62979 00:11:03.467 15:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62979' 00:11:03.467 15:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:03.467 15:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62979 00:11:03.467 15:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62979 ']' 00:11:03.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:03.467 15:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.467 15:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:03.467 15:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.467 15:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:03.467 15:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.467 [2024-12-06 15:37:46.735332] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:11:03.467 [2024-12-06 15:37:46.735755] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.726 [2024-12-06 15:37:46.922394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.985 [2024-12-06 15:37:47.064674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.245 [2024-12-06 15:37:47.313498] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.245 [2024-12-06 15:37:47.313850] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.505 15:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:04.505 15:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:04.505 15:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:04.505 15:37:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.505 15:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.505 [2024-12-06 15:37:47.593307] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:04.505 [2024-12-06 15:37:47.593573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:04.505 [2024-12-06 15:37:47.593673] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:04.505 [2024-12-06 15:37:47.593721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:04.505 15:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.505 15:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:04.505 15:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.505 15:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.505 15:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:04.505 15:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:04.505 15:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:04.505 15:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.505 15:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.505 15:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.505 15:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.505 15:37:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.505 15:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.505 15:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.505 15:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.505 15:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.505 15:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.505 "name": "Existed_Raid", 00:11:04.505 "uuid": "ce50d9dd-2ae3-4ab5-bde1-31be4f4926a1", 00:11:04.505 "strip_size_kb": 0, 00:11:04.505 "state": "configuring", 00:11:04.505 "raid_level": "raid1", 00:11:04.505 "superblock": true, 00:11:04.505 "num_base_bdevs": 2, 00:11:04.505 "num_base_bdevs_discovered": 0, 00:11:04.505 "num_base_bdevs_operational": 2, 00:11:04.505 "base_bdevs_list": [ 00:11:04.505 { 00:11:04.505 "name": "BaseBdev1", 00:11:04.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.505 "is_configured": false, 00:11:04.505 "data_offset": 0, 00:11:04.505 "data_size": 0 00:11:04.505 }, 00:11:04.505 { 00:11:04.505 "name": "BaseBdev2", 00:11:04.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.505 "is_configured": false, 00:11:04.505 "data_offset": 0, 00:11:04.505 "data_size": 0 00:11:04.505 } 00:11:04.505 ] 00:11:04.505 }' 00:11:04.505 15:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.505 15:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.765 15:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:04.765 15:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:04.765 15:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.765 [2024-12-06 15:37:48.032718] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:04.765 [2024-12-06 15:37:48.032766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:04.765 15:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.765 15:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:04.765 15:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.765 15:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.765 [2024-12-06 15:37:48.044713] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:04.765 [2024-12-06 15:37:48.044765] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:04.765 [2024-12-06 15:37:48.044777] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:04.765 [2024-12-06 15:37:48.044794] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:04.765 15:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.765 15:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:04.765 15:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.765 15:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.024 [2024-12-06 15:37:48.102575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:11:05.024 BaseBdev1 00:11:05.024 15:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.024 15:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:05.024 15:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:05.024 15:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:05.024 15:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:05.024 15:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:05.024 15:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:05.024 15:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:05.024 15:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.024 15:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.024 15:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.024 15:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:05.024 15:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.024 15:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.024 [ 00:11:05.024 { 00:11:05.024 "name": "BaseBdev1", 00:11:05.024 "aliases": [ 00:11:05.024 "b1d531d4-d6f1-4d76-a912-24f55517257f" 00:11:05.024 ], 00:11:05.025 "product_name": "Malloc disk", 00:11:05.025 "block_size": 512, 00:11:05.025 "num_blocks": 65536, 00:11:05.025 "uuid": "b1d531d4-d6f1-4d76-a912-24f55517257f", 00:11:05.025 
"assigned_rate_limits": { 00:11:05.025 "rw_ios_per_sec": 0, 00:11:05.025 "rw_mbytes_per_sec": 0, 00:11:05.025 "r_mbytes_per_sec": 0, 00:11:05.025 "w_mbytes_per_sec": 0 00:11:05.025 }, 00:11:05.025 "claimed": true, 00:11:05.025 "claim_type": "exclusive_write", 00:11:05.025 "zoned": false, 00:11:05.025 "supported_io_types": { 00:11:05.025 "read": true, 00:11:05.025 "write": true, 00:11:05.025 "unmap": true, 00:11:05.025 "flush": true, 00:11:05.025 "reset": true, 00:11:05.025 "nvme_admin": false, 00:11:05.025 "nvme_io": false, 00:11:05.025 "nvme_io_md": false, 00:11:05.025 "write_zeroes": true, 00:11:05.025 "zcopy": true, 00:11:05.025 "get_zone_info": false, 00:11:05.025 "zone_management": false, 00:11:05.025 "zone_append": false, 00:11:05.025 "compare": false, 00:11:05.025 "compare_and_write": false, 00:11:05.025 "abort": true, 00:11:05.025 "seek_hole": false, 00:11:05.025 "seek_data": false, 00:11:05.025 "copy": true, 00:11:05.025 "nvme_iov_md": false 00:11:05.025 }, 00:11:05.025 "memory_domains": [ 00:11:05.025 { 00:11:05.025 "dma_device_id": "system", 00:11:05.025 "dma_device_type": 1 00:11:05.025 }, 00:11:05.025 { 00:11:05.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.025 "dma_device_type": 2 00:11:05.025 } 00:11:05.025 ], 00:11:05.025 "driver_specific": {} 00:11:05.025 } 00:11:05.025 ] 00:11:05.025 15:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.025 15:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:05.025 15:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:05.025 15:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.025 15:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.025 15:37:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:05.025 15:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:05.025 15:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:05.025 15:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.025 15:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.025 15:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.025 15:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.025 15:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.025 15:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.025 15:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.025 15:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.025 15:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.025 15:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.025 "name": "Existed_Raid", 00:11:05.025 "uuid": "6a914339-0f7b-4b71-ac1e-a81254ec3027", 00:11:05.025 "strip_size_kb": 0, 00:11:05.025 "state": "configuring", 00:11:05.025 "raid_level": "raid1", 00:11:05.025 "superblock": true, 00:11:05.025 "num_base_bdevs": 2, 00:11:05.025 "num_base_bdevs_discovered": 1, 00:11:05.025 "num_base_bdevs_operational": 2, 00:11:05.025 "base_bdevs_list": [ 00:11:05.025 { 00:11:05.025 "name": "BaseBdev1", 00:11:05.025 "uuid": "b1d531d4-d6f1-4d76-a912-24f55517257f", 00:11:05.025 "is_configured": true, 00:11:05.025 "data_offset": 2048, 
00:11:05.025 "data_size": 63488 00:11:05.025 }, 00:11:05.025 { 00:11:05.025 "name": "BaseBdev2", 00:11:05.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.025 "is_configured": false, 00:11:05.025 "data_offset": 0, 00:11:05.025 "data_size": 0 00:11:05.025 } 00:11:05.025 ] 00:11:05.025 }' 00:11:05.025 15:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.025 15:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.284 15:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:05.284 15:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.284 15:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.284 [2024-12-06 15:37:48.558116] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:05.284 [2024-12-06 15:37:48.558333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:05.284 15:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.284 15:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:05.284 15:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.284 15:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.284 [2024-12-06 15:37:48.570161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:05.284 [2024-12-06 15:37:48.572636] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:05.284 [2024-12-06 15:37:48.572801] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:11:05.284 15:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.284 15:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:05.284 15:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:05.284 15:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:05.284 15:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.284 15:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.284 15:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:05.284 15:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:05.284 15:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:05.284 15:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.284 15:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.284 15:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.544 15:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.544 15:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.544 15:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.544 15:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.544 15:37:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:05.544 15:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.544 15:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.544 "name": "Existed_Raid", 00:11:05.544 "uuid": "fd9cbc94-9670-458d-a326-f4f79b12a347", 00:11:05.544 "strip_size_kb": 0, 00:11:05.544 "state": "configuring", 00:11:05.544 "raid_level": "raid1", 00:11:05.544 "superblock": true, 00:11:05.544 "num_base_bdevs": 2, 00:11:05.544 "num_base_bdevs_discovered": 1, 00:11:05.544 "num_base_bdevs_operational": 2, 00:11:05.544 "base_bdevs_list": [ 00:11:05.544 { 00:11:05.544 "name": "BaseBdev1", 00:11:05.544 "uuid": "b1d531d4-d6f1-4d76-a912-24f55517257f", 00:11:05.544 "is_configured": true, 00:11:05.544 "data_offset": 2048, 00:11:05.544 "data_size": 63488 00:11:05.544 }, 00:11:05.544 { 00:11:05.544 "name": "BaseBdev2", 00:11:05.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.544 "is_configured": false, 00:11:05.544 "data_offset": 0, 00:11:05.544 "data_size": 0 00:11:05.544 } 00:11:05.544 ] 00:11:05.544 }' 00:11:05.544 15:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.544 15:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.847 15:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:05.847 15:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.847 15:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.847 [2024-12-06 15:37:49.032641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:05.847 [2024-12-06 15:37:49.032978] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:05.847 [2024-12-06 15:37:49.032999] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:05.847 [2024-12-06 15:37:49.033314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:05.847 BaseBdev2 00:11:05.847 [2024-12-06 15:37:49.033498] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:05.847 [2024-12-06 15:37:49.033534] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:05.847 [2024-12-06 15:37:49.033700] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.847 15:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.847 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:05.847 15:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:05.847 15:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:05.847 15:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:05.847 15:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:05.847 15:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:05.847 15:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:05.847 15:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.847 15:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.847 15:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.847 15:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:05.847 15:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.847 15:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.847 [ 00:11:05.847 { 00:11:05.847 "name": "BaseBdev2", 00:11:05.847 "aliases": [ 00:11:05.847 "179aa2b9-443b-43c3-81c3-f99c7ee81a42" 00:11:05.847 ], 00:11:05.847 "product_name": "Malloc disk", 00:11:05.847 "block_size": 512, 00:11:05.847 "num_blocks": 65536, 00:11:05.847 "uuid": "179aa2b9-443b-43c3-81c3-f99c7ee81a42", 00:11:05.847 "assigned_rate_limits": { 00:11:05.847 "rw_ios_per_sec": 0, 00:11:05.847 "rw_mbytes_per_sec": 0, 00:11:05.847 "r_mbytes_per_sec": 0, 00:11:05.847 "w_mbytes_per_sec": 0 00:11:05.847 }, 00:11:05.847 "claimed": true, 00:11:05.847 "claim_type": "exclusive_write", 00:11:05.847 "zoned": false, 00:11:05.847 "supported_io_types": { 00:11:05.847 "read": true, 00:11:05.847 "write": true, 00:11:05.847 "unmap": true, 00:11:05.847 "flush": true, 00:11:05.847 "reset": true, 00:11:05.847 "nvme_admin": false, 00:11:05.847 "nvme_io": false, 00:11:05.847 "nvme_io_md": false, 00:11:05.847 "write_zeroes": true, 00:11:05.847 "zcopy": true, 00:11:05.847 "get_zone_info": false, 00:11:05.847 "zone_management": false, 00:11:05.847 "zone_append": false, 00:11:05.847 "compare": false, 00:11:05.847 "compare_and_write": false, 00:11:05.847 "abort": true, 00:11:05.847 "seek_hole": false, 00:11:05.847 "seek_data": false, 00:11:05.847 "copy": true, 00:11:05.847 "nvme_iov_md": false 00:11:05.847 }, 00:11:05.847 "memory_domains": [ 00:11:05.847 { 00:11:05.847 "dma_device_id": "system", 00:11:05.847 "dma_device_type": 1 00:11:05.847 }, 00:11:05.847 { 00:11:05.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.847 "dma_device_type": 2 00:11:05.847 } 00:11:05.847 ], 00:11:05.847 "driver_specific": {} 00:11:05.847 } 00:11:05.847 ] 00:11:05.847 15:37:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.847 15:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:05.847 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:05.847 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:05.847 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:05.847 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.847 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.847 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:05.847 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:05.847 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:05.848 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.848 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.848 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.848 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.848 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.848 15:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.848 15:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.848 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:11:05.848 15:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.133 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.133 "name": "Existed_Raid", 00:11:06.133 "uuid": "fd9cbc94-9670-458d-a326-f4f79b12a347", 00:11:06.133 "strip_size_kb": 0, 00:11:06.133 "state": "online", 00:11:06.133 "raid_level": "raid1", 00:11:06.133 "superblock": true, 00:11:06.133 "num_base_bdevs": 2, 00:11:06.133 "num_base_bdevs_discovered": 2, 00:11:06.133 "num_base_bdevs_operational": 2, 00:11:06.133 "base_bdevs_list": [ 00:11:06.133 { 00:11:06.133 "name": "BaseBdev1", 00:11:06.133 "uuid": "b1d531d4-d6f1-4d76-a912-24f55517257f", 00:11:06.133 "is_configured": true, 00:11:06.133 "data_offset": 2048, 00:11:06.133 "data_size": 63488 00:11:06.133 }, 00:11:06.133 { 00:11:06.133 "name": "BaseBdev2", 00:11:06.133 "uuid": "179aa2b9-443b-43c3-81c3-f99c7ee81a42", 00:11:06.133 "is_configured": true, 00:11:06.133 "data_offset": 2048, 00:11:06.133 "data_size": 63488 00:11:06.133 } 00:11:06.133 ] 00:11:06.133 }' 00:11:06.133 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.133 15:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.393 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:06.393 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:06.393 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:06.393 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:06.393 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:06.393 15:37:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:06.393 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:06.393 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:06.393 15:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.393 15:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.393 [2024-12-06 15:37:49.448968] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:06.393 15:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.393 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:06.393 "name": "Existed_Raid", 00:11:06.393 "aliases": [ 00:11:06.393 "fd9cbc94-9670-458d-a326-f4f79b12a347" 00:11:06.393 ], 00:11:06.393 "product_name": "Raid Volume", 00:11:06.393 "block_size": 512, 00:11:06.393 "num_blocks": 63488, 00:11:06.393 "uuid": "fd9cbc94-9670-458d-a326-f4f79b12a347", 00:11:06.393 "assigned_rate_limits": { 00:11:06.393 "rw_ios_per_sec": 0, 00:11:06.393 "rw_mbytes_per_sec": 0, 00:11:06.393 "r_mbytes_per_sec": 0, 00:11:06.393 "w_mbytes_per_sec": 0 00:11:06.393 }, 00:11:06.393 "claimed": false, 00:11:06.394 "zoned": false, 00:11:06.394 "supported_io_types": { 00:11:06.394 "read": true, 00:11:06.394 "write": true, 00:11:06.394 "unmap": false, 00:11:06.394 "flush": false, 00:11:06.394 "reset": true, 00:11:06.394 "nvme_admin": false, 00:11:06.394 "nvme_io": false, 00:11:06.394 "nvme_io_md": false, 00:11:06.394 "write_zeroes": true, 00:11:06.394 "zcopy": false, 00:11:06.394 "get_zone_info": false, 00:11:06.394 "zone_management": false, 00:11:06.394 "zone_append": false, 00:11:06.394 "compare": false, 00:11:06.394 "compare_and_write": false, 00:11:06.394 "abort": false, 00:11:06.394 "seek_hole": false, 
00:11:06.394 "seek_data": false, 00:11:06.394 "copy": false, 00:11:06.394 "nvme_iov_md": false 00:11:06.394 }, 00:11:06.394 "memory_domains": [ 00:11:06.394 { 00:11:06.394 "dma_device_id": "system", 00:11:06.394 "dma_device_type": 1 00:11:06.394 }, 00:11:06.394 { 00:11:06.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.394 "dma_device_type": 2 00:11:06.394 }, 00:11:06.394 { 00:11:06.394 "dma_device_id": "system", 00:11:06.394 "dma_device_type": 1 00:11:06.394 }, 00:11:06.394 { 00:11:06.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.394 "dma_device_type": 2 00:11:06.394 } 00:11:06.394 ], 00:11:06.394 "driver_specific": { 00:11:06.394 "raid": { 00:11:06.394 "uuid": "fd9cbc94-9670-458d-a326-f4f79b12a347", 00:11:06.394 "strip_size_kb": 0, 00:11:06.394 "state": "online", 00:11:06.394 "raid_level": "raid1", 00:11:06.394 "superblock": true, 00:11:06.394 "num_base_bdevs": 2, 00:11:06.394 "num_base_bdevs_discovered": 2, 00:11:06.394 "num_base_bdevs_operational": 2, 00:11:06.394 "base_bdevs_list": [ 00:11:06.394 { 00:11:06.394 "name": "BaseBdev1", 00:11:06.394 "uuid": "b1d531d4-d6f1-4d76-a912-24f55517257f", 00:11:06.394 "is_configured": true, 00:11:06.394 "data_offset": 2048, 00:11:06.394 "data_size": 63488 00:11:06.394 }, 00:11:06.394 { 00:11:06.394 "name": "BaseBdev2", 00:11:06.394 "uuid": "179aa2b9-443b-43c3-81c3-f99c7ee81a42", 00:11:06.394 "is_configured": true, 00:11:06.394 "data_offset": 2048, 00:11:06.394 "data_size": 63488 00:11:06.394 } 00:11:06.394 ] 00:11:06.394 } 00:11:06.394 } 00:11:06.394 }' 00:11:06.394 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:06.394 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:06.394 BaseBdev2' 00:11:06.394 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:11:06.394 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:06.394 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.394 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:06.394 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.394 15:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.394 15:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.394 15:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.394 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.394 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.394 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.394 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.394 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:06.394 15:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.394 15:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.394 15:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.394 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.394 15:37:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.394 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:06.394 15:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.394 15:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.394 [2024-12-06 15:37:49.680744] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:06.654 15:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.654 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:06.654 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:06.654 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:06.654 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:06.654 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:06.654 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:11:06.654 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.654 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:06.654 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:06.654 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:06.654 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:06.654 15:37:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.654 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.654 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.654 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.654 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.654 15:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.654 15:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.654 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.654 15:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.654 15:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.654 "name": "Existed_Raid", 00:11:06.654 "uuid": "fd9cbc94-9670-458d-a326-f4f79b12a347", 00:11:06.654 "strip_size_kb": 0, 00:11:06.654 "state": "online", 00:11:06.654 "raid_level": "raid1", 00:11:06.654 "superblock": true, 00:11:06.654 "num_base_bdevs": 2, 00:11:06.654 "num_base_bdevs_discovered": 1, 00:11:06.654 "num_base_bdevs_operational": 1, 00:11:06.654 "base_bdevs_list": [ 00:11:06.654 { 00:11:06.654 "name": null, 00:11:06.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.654 "is_configured": false, 00:11:06.654 "data_offset": 0, 00:11:06.654 "data_size": 63488 00:11:06.654 }, 00:11:06.654 { 00:11:06.654 "name": "BaseBdev2", 00:11:06.654 "uuid": "179aa2b9-443b-43c3-81c3-f99c7ee81a42", 00:11:06.654 "is_configured": true, 00:11:06.654 "data_offset": 2048, 00:11:06.654 "data_size": 63488 00:11:06.654 } 00:11:06.654 ] 00:11:06.654 }' 00:11:06.654 15:37:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.654 15:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.223 15:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:07.223 15:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:07.223 15:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.223 15:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.223 15:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.223 15:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:07.223 15:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.223 15:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:07.223 15:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:07.223 15:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:07.223 15:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.223 15:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.223 [2024-12-06 15:37:50.278786] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:07.223 [2024-12-06 15:37:50.279092] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:07.223 [2024-12-06 15:37:50.384715] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:07.223 [2024-12-06 15:37:50.384797] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:07.223 [2024-12-06 15:37:50.384815] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:07.223 15:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.223 15:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:07.223 15:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:07.223 15:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:07.223 15:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.223 15:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.223 15:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.223 15:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.223 15:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:07.223 15:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:07.223 15:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:11:07.223 15:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62979 00:11:07.223 15:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62979 ']' 00:11:07.223 15:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62979 00:11:07.223 15:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:07.223 15:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:11:07.223 15:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62979 00:11:07.223 killing process with pid 62979 00:11:07.223 15:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:07.223 15:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:07.223 15:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62979' 00:11:07.223 15:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62979 00:11:07.223 [2024-12-06 15:37:50.479813] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:07.223 15:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62979 00:11:07.223 [2024-12-06 15:37:50.497959] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:08.604 ************************************ 00:11:08.604 END TEST raid_state_function_test_sb 00:11:08.604 ************************************ 00:11:08.604 15:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:08.604 00:11:08.604 real 0m5.134s 00:11:08.604 user 0m7.129s 00:11:08.604 sys 0m1.054s 00:11:08.604 15:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:08.604 15:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.604 15:37:51 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:11:08.604 15:37:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:08.604 15:37:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:08.604 15:37:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:08.604 ************************************ 00:11:08.604 START TEST 
raid_superblock_test 00:11:08.604 ************************************ 00:11:08.604 15:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:11:08.604 15:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:08.604 15:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:11:08.604 15:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:08.604 15:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:08.604 15:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:08.604 15:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:08.604 15:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:08.604 15:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:08.604 15:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:08.604 15:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:08.604 15:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:08.604 15:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:08.604 15:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:08.604 15:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:08.604 15:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:08.604 15:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63231 00:11:08.604 15:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63231 00:11:08.604 15:37:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63231 ']' 00:11:08.604 15:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.604 15:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:08.604 15:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:08.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.604 15:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.604 15:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:08.604 15:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.864 [2024-12-06 15:37:51.932201] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:11:08.864 [2024-12-06 15:37:51.932580] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63231 ] 00:11:08.864 [2024-12-06 15:37:52.104308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.123 [2024-12-06 15:37:52.251150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.381 [2024-12-06 15:37:52.502635] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:09.381 [2024-12-06 15:37:52.502950] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:09.640 15:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:09.640 15:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:09.640 15:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:09.640 15:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:09.640 15:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:09.640 15:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:09.640 15:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:09.640 15:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:09.640 15:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:09.640 15:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:09.640 15:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:09.640 
15:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.640 15:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.640 malloc1 00:11:09.640 15:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.640 15:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:09.640 15:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.640 15:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.640 [2024-12-06 15:37:52.830934] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:09.640 [2024-12-06 15:37:52.831017] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.640 [2024-12-06 15:37:52.831046] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:09.640 [2024-12-06 15:37:52.831058] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.640 [2024-12-06 15:37:52.833851] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.640 [2024-12-06 15:37:52.833895] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:09.640 pt1 00:11:09.640 15:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.640 15:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:09.640 15:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:09.641 15:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:09.641 15:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:09.641 15:37:52 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:09.641 15:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:09.641 15:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:09.641 15:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:09.641 15:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:09.641 15:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.641 15:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.641 malloc2 00:11:09.641 15:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.641 15:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:09.641 15:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.641 15:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.641 [2024-12-06 15:37:52.892004] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:09.641 [2024-12-06 15:37:52.892085] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.641 [2024-12-06 15:37:52.892122] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:09.641 [2024-12-06 15:37:52.892134] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.641 [2024-12-06 15:37:52.894969] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.641 [2024-12-06 15:37:52.895015] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:09.641 
pt2 00:11:09.641 15:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.641 15:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:09.641 15:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:09.641 15:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:11:09.641 15:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.641 15:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.641 [2024-12-06 15:37:52.900044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:09.641 [2024-12-06 15:37:52.902581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:09.641 [2024-12-06 15:37:52.902755] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:09.641 [2024-12-06 15:37:52.902776] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:09.641 [2024-12-06 15:37:52.903067] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:09.641 [2024-12-06 15:37:52.903242] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:09.641 [2024-12-06 15:37:52.903262] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:09.641 [2024-12-06 15:37:52.903444] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.641 15:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.641 15:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:09.641 15:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:11:09.641 15:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:09.641 15:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.641 15:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:09.641 15:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:09.641 15:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.641 15:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.641 15:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.641 15:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.641 15:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.641 15:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.641 15:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.641 15:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.641 15:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.899 15:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.899 "name": "raid_bdev1", 00:11:09.899 "uuid": "253c1b2c-20ac-4fef-849e-f4cb12c1110d", 00:11:09.899 "strip_size_kb": 0, 00:11:09.899 "state": "online", 00:11:09.899 "raid_level": "raid1", 00:11:09.899 "superblock": true, 00:11:09.899 "num_base_bdevs": 2, 00:11:09.899 "num_base_bdevs_discovered": 2, 00:11:09.899 "num_base_bdevs_operational": 2, 00:11:09.899 "base_bdevs_list": [ 00:11:09.899 { 00:11:09.899 "name": "pt1", 00:11:09.899 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:11:09.899 "is_configured": true, 00:11:09.899 "data_offset": 2048, 00:11:09.899 "data_size": 63488 00:11:09.899 }, 00:11:09.899 { 00:11:09.899 "name": "pt2", 00:11:09.899 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:09.899 "is_configured": true, 00:11:09.899 "data_offset": 2048, 00:11:09.899 "data_size": 63488 00:11:09.899 } 00:11:09.899 ] 00:11:09.899 }' 00:11:09.899 15:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.899 15:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.158 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:10.158 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:10.158 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:10.158 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:10.158 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:10.158 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:10.158 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:10.158 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:10.158 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.158 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.158 [2024-12-06 15:37:53.335738] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:10.158 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.158 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:11:10.158 "name": "raid_bdev1", 00:11:10.158 "aliases": [ 00:11:10.158 "253c1b2c-20ac-4fef-849e-f4cb12c1110d" 00:11:10.158 ], 00:11:10.158 "product_name": "Raid Volume", 00:11:10.158 "block_size": 512, 00:11:10.158 "num_blocks": 63488, 00:11:10.158 "uuid": "253c1b2c-20ac-4fef-849e-f4cb12c1110d", 00:11:10.158 "assigned_rate_limits": { 00:11:10.158 "rw_ios_per_sec": 0, 00:11:10.158 "rw_mbytes_per_sec": 0, 00:11:10.158 "r_mbytes_per_sec": 0, 00:11:10.158 "w_mbytes_per_sec": 0 00:11:10.158 }, 00:11:10.158 "claimed": false, 00:11:10.158 "zoned": false, 00:11:10.158 "supported_io_types": { 00:11:10.158 "read": true, 00:11:10.158 "write": true, 00:11:10.158 "unmap": false, 00:11:10.158 "flush": false, 00:11:10.158 "reset": true, 00:11:10.158 "nvme_admin": false, 00:11:10.158 "nvme_io": false, 00:11:10.158 "nvme_io_md": false, 00:11:10.158 "write_zeroes": true, 00:11:10.158 "zcopy": false, 00:11:10.158 "get_zone_info": false, 00:11:10.158 "zone_management": false, 00:11:10.158 "zone_append": false, 00:11:10.158 "compare": false, 00:11:10.158 "compare_and_write": false, 00:11:10.158 "abort": false, 00:11:10.158 "seek_hole": false, 00:11:10.158 "seek_data": false, 00:11:10.158 "copy": false, 00:11:10.158 "nvme_iov_md": false 00:11:10.158 }, 00:11:10.158 "memory_domains": [ 00:11:10.158 { 00:11:10.158 "dma_device_id": "system", 00:11:10.158 "dma_device_type": 1 00:11:10.158 }, 00:11:10.158 { 00:11:10.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.158 "dma_device_type": 2 00:11:10.158 }, 00:11:10.158 { 00:11:10.158 "dma_device_id": "system", 00:11:10.158 "dma_device_type": 1 00:11:10.158 }, 00:11:10.158 { 00:11:10.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.158 "dma_device_type": 2 00:11:10.158 } 00:11:10.158 ], 00:11:10.158 "driver_specific": { 00:11:10.158 "raid": { 00:11:10.158 "uuid": "253c1b2c-20ac-4fef-849e-f4cb12c1110d", 00:11:10.158 "strip_size_kb": 0, 00:11:10.158 "state": "online", 00:11:10.158 "raid_level": "raid1", 
00:11:10.158 "superblock": true, 00:11:10.158 "num_base_bdevs": 2, 00:11:10.158 "num_base_bdevs_discovered": 2, 00:11:10.158 "num_base_bdevs_operational": 2, 00:11:10.158 "base_bdevs_list": [ 00:11:10.158 { 00:11:10.158 "name": "pt1", 00:11:10.158 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:10.158 "is_configured": true, 00:11:10.158 "data_offset": 2048, 00:11:10.158 "data_size": 63488 00:11:10.158 }, 00:11:10.158 { 00:11:10.158 "name": "pt2", 00:11:10.158 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:10.158 "is_configured": true, 00:11:10.158 "data_offset": 2048, 00:11:10.158 "data_size": 63488 00:11:10.158 } 00:11:10.158 ] 00:11:10.158 } 00:11:10.158 } 00:11:10.158 }' 00:11:10.158 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:10.158 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:10.158 pt2' 00:11:10.158 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.417 [2024-12-06 15:37:53.559393] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=253c1b2c-20ac-4fef-849e-f4cb12c1110d 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 253c1b2c-20ac-4fef-849e-f4cb12c1110d ']' 00:11:10.417 15:37:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.417 [2024-12-06 15:37:53.598987] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:10.417 [2024-12-06 15:37:53.599124] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:10.417 [2024-12-06 15:37:53.599261] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:10.417 [2024-12-06 15:37:53.599337] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:10.417 [2024-12-06 15:37:53.599355] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:10.417 15:37:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.417 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.676 [2024-12-06 15:37:53.714849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:10.676 [2024-12-06 15:37:53.717342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:10.676 [2024-12-06 15:37:53.717419] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:10.676 [2024-12-06 15:37:53.717486] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:10.676 [2024-12-06 15:37:53.717515] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:10.676 [2024-12-06 15:37:53.717538] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:10.676 request: 00:11:10.676 { 00:11:10.676 "name": "raid_bdev1", 00:11:10.676 "raid_level": "raid1", 00:11:10.676 "base_bdevs": [ 00:11:10.676 "malloc1", 00:11:10.676 "malloc2" 00:11:10.676 ], 00:11:10.676 "superblock": false, 00:11:10.676 "method": "bdev_raid_create", 00:11:10.676 "req_id": 1 00:11:10.676 } 00:11:10.676 Got 
JSON-RPC error response 00:11:10.676 response: 00:11:10.676 { 00:11:10.676 "code": -17, 00:11:10.676 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:10.676 } 00:11:10.676 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:10.676 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:10.676 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:10.676 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:10.676 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:10.676 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.676 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:10.676 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.676 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.676 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.676 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:10.676 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:10.676 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:10.676 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.676 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.676 [2024-12-06 15:37:53.770740] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:10.676 [2024-12-06 15:37:53.770816] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:11:10.676 [2024-12-06 15:37:53.770842] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:10.676 [2024-12-06 15:37:53.770858] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.676 [2024-12-06 15:37:53.773731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.676 [2024-12-06 15:37:53.773772] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:10.676 [2024-12-06 15:37:53.773873] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:10.676 [2024-12-06 15:37:53.773944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:10.676 pt1 00:11:10.676 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.676 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:10.676 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:10.676 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.676 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:10.676 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:10.676 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:10.676 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.676 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.676 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.676 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.676 
15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.676 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.676 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.676 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.676 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.676 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.676 "name": "raid_bdev1", 00:11:10.676 "uuid": "253c1b2c-20ac-4fef-849e-f4cb12c1110d", 00:11:10.676 "strip_size_kb": 0, 00:11:10.676 "state": "configuring", 00:11:10.676 "raid_level": "raid1", 00:11:10.676 "superblock": true, 00:11:10.676 "num_base_bdevs": 2, 00:11:10.676 "num_base_bdevs_discovered": 1, 00:11:10.676 "num_base_bdevs_operational": 2, 00:11:10.676 "base_bdevs_list": [ 00:11:10.676 { 00:11:10.676 "name": "pt1", 00:11:10.676 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:10.676 "is_configured": true, 00:11:10.676 "data_offset": 2048, 00:11:10.676 "data_size": 63488 00:11:10.676 }, 00:11:10.676 { 00:11:10.676 "name": null, 00:11:10.676 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:10.676 "is_configured": false, 00:11:10.676 "data_offset": 2048, 00:11:10.676 "data_size": 63488 00:11:10.676 } 00:11:10.676 ] 00:11:10.676 }' 00:11:10.676 15:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.676 15:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.936 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:11:10.936 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:10.936 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:11:10.936 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:10.936 15:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.936 15:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.936 [2024-12-06 15:37:54.166296] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:10.936 [2024-12-06 15:37:54.166529] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.936 [2024-12-06 15:37:54.166595] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:10.936 [2024-12-06 15:37:54.166966] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.936 [2024-12-06 15:37:54.167589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.936 [2024-12-06 15:37:54.167729] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:10.936 [2024-12-06 15:37:54.167927] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:10.936 [2024-12-06 15:37:54.168039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:10.936 [2024-12-06 15:37:54.168230] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:10.936 [2024-12-06 15:37:54.168320] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:10.936 [2024-12-06 15:37:54.168668] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:10.936 [2024-12-06 15:37:54.168952] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:10.936 [2024-12-06 15:37:54.169049] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007e80 00:11:10.936 [2024-12-06 15:37:54.169299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:10.936 pt2 00:11:10.936 15:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.936 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:10.936 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:10.936 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:10.936 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:10.936 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:10.936 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:10.936 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:10.936 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:10.936 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.936 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.936 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.936 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.936 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.936 15:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.936 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.936 15:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:11:10.936 15:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.936 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.936 "name": "raid_bdev1", 00:11:10.936 "uuid": "253c1b2c-20ac-4fef-849e-f4cb12c1110d", 00:11:10.936 "strip_size_kb": 0, 00:11:10.936 "state": "online", 00:11:10.936 "raid_level": "raid1", 00:11:10.936 "superblock": true, 00:11:10.936 "num_base_bdevs": 2, 00:11:10.936 "num_base_bdevs_discovered": 2, 00:11:10.936 "num_base_bdevs_operational": 2, 00:11:10.936 "base_bdevs_list": [ 00:11:10.936 { 00:11:10.936 "name": "pt1", 00:11:10.936 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:10.936 "is_configured": true, 00:11:10.936 "data_offset": 2048, 00:11:10.936 "data_size": 63488 00:11:10.936 }, 00:11:10.936 { 00:11:10.936 "name": "pt2", 00:11:10.936 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:10.936 "is_configured": true, 00:11:10.936 "data_offset": 2048, 00:11:10.936 "data_size": 63488 00:11:10.936 } 00:11:10.936 ] 00:11:10.936 }' 00:11:10.936 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.936 15:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.505 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:11.505 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:11.505 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:11.505 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:11.505 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:11.505 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:11.505 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:11.505 15:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.505 15:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.505 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:11.505 [2024-12-06 15:37:54.558167] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:11.505 15:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.505 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:11.505 "name": "raid_bdev1", 00:11:11.505 "aliases": [ 00:11:11.505 "253c1b2c-20ac-4fef-849e-f4cb12c1110d" 00:11:11.505 ], 00:11:11.505 "product_name": "Raid Volume", 00:11:11.505 "block_size": 512, 00:11:11.505 "num_blocks": 63488, 00:11:11.505 "uuid": "253c1b2c-20ac-4fef-849e-f4cb12c1110d", 00:11:11.505 "assigned_rate_limits": { 00:11:11.505 "rw_ios_per_sec": 0, 00:11:11.505 "rw_mbytes_per_sec": 0, 00:11:11.505 "r_mbytes_per_sec": 0, 00:11:11.505 "w_mbytes_per_sec": 0 00:11:11.505 }, 00:11:11.505 "claimed": false, 00:11:11.505 "zoned": false, 00:11:11.505 "supported_io_types": { 00:11:11.505 "read": true, 00:11:11.505 "write": true, 00:11:11.505 "unmap": false, 00:11:11.505 "flush": false, 00:11:11.505 "reset": true, 00:11:11.505 "nvme_admin": false, 00:11:11.505 "nvme_io": false, 00:11:11.505 "nvme_io_md": false, 00:11:11.505 "write_zeroes": true, 00:11:11.505 "zcopy": false, 00:11:11.505 "get_zone_info": false, 00:11:11.505 "zone_management": false, 00:11:11.505 "zone_append": false, 00:11:11.505 "compare": false, 00:11:11.505 "compare_and_write": false, 00:11:11.505 "abort": false, 00:11:11.505 "seek_hole": false, 00:11:11.505 "seek_data": false, 00:11:11.505 "copy": false, 00:11:11.505 "nvme_iov_md": false 00:11:11.505 }, 00:11:11.505 "memory_domains": [ 00:11:11.505 { 00:11:11.505 "dma_device_id": 
"system", 00:11:11.505 "dma_device_type": 1 00:11:11.505 }, 00:11:11.505 { 00:11:11.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.505 "dma_device_type": 2 00:11:11.505 }, 00:11:11.505 { 00:11:11.505 "dma_device_id": "system", 00:11:11.505 "dma_device_type": 1 00:11:11.505 }, 00:11:11.505 { 00:11:11.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.505 "dma_device_type": 2 00:11:11.505 } 00:11:11.505 ], 00:11:11.505 "driver_specific": { 00:11:11.505 "raid": { 00:11:11.505 "uuid": "253c1b2c-20ac-4fef-849e-f4cb12c1110d", 00:11:11.505 "strip_size_kb": 0, 00:11:11.505 "state": "online", 00:11:11.505 "raid_level": "raid1", 00:11:11.505 "superblock": true, 00:11:11.505 "num_base_bdevs": 2, 00:11:11.505 "num_base_bdevs_discovered": 2, 00:11:11.505 "num_base_bdevs_operational": 2, 00:11:11.505 "base_bdevs_list": [ 00:11:11.505 { 00:11:11.505 "name": "pt1", 00:11:11.505 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:11.505 "is_configured": true, 00:11:11.505 "data_offset": 2048, 00:11:11.505 "data_size": 63488 00:11:11.505 }, 00:11:11.505 { 00:11:11.505 "name": "pt2", 00:11:11.506 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:11.506 "is_configured": true, 00:11:11.506 "data_offset": 2048, 00:11:11.506 "data_size": 63488 00:11:11.506 } 00:11:11.506 ] 00:11:11.506 } 00:11:11.506 } 00:11:11.506 }' 00:11:11.506 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:11.506 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:11.506 pt2' 00:11:11.506 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.506 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:11.506 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:11:11.506 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:11.506 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.506 15:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.506 15:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.506 15:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.506 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.506 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.506 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.506 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:11.506 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.506 15:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.506 15:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.506 15:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.506 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.506 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.506 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:11.506 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:11.506 15:37:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.506 15:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.506 [2024-12-06 15:37:54.757913] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:11.506 15:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.506 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 253c1b2c-20ac-4fef-849e-f4cb12c1110d '!=' 253c1b2c-20ac-4fef-849e-f4cb12c1110d ']' 00:11:11.506 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:11.506 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:11.506 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:11.506 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:11.506 15:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.506 15:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.506 [2024-12-06 15:37:54.797725] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:11.765 15:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.765 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:11.765 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:11.765 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:11.765 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:11.765 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:11.765 15:37:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:11.765 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.765 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.765 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.765 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.765 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.765 15:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.765 15:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.765 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.765 15:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.765 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.765 "name": "raid_bdev1", 00:11:11.765 "uuid": "253c1b2c-20ac-4fef-849e-f4cb12c1110d", 00:11:11.765 "strip_size_kb": 0, 00:11:11.765 "state": "online", 00:11:11.765 "raid_level": "raid1", 00:11:11.765 "superblock": true, 00:11:11.765 "num_base_bdevs": 2, 00:11:11.765 "num_base_bdevs_discovered": 1, 00:11:11.765 "num_base_bdevs_operational": 1, 00:11:11.765 "base_bdevs_list": [ 00:11:11.765 { 00:11:11.765 "name": null, 00:11:11.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.765 "is_configured": false, 00:11:11.765 "data_offset": 0, 00:11:11.765 "data_size": 63488 00:11:11.765 }, 00:11:11.765 { 00:11:11.765 "name": "pt2", 00:11:11.765 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:11.765 "is_configured": true, 00:11:11.765 "data_offset": 2048, 00:11:11.765 "data_size": 63488 00:11:11.765 } 00:11:11.765 ] 00:11:11.765 }' 
00:11:11.765 15:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.765 15:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.022 [2024-12-06 15:37:55.205181] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:12.022 [2024-12-06 15:37:55.205218] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:12.022 [2024-12-06 15:37:55.205325] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:12.022 [2024-12-06 15:37:55.205385] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:12.022 [2024-12-06 15:37:55.205402] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' 
']' 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.022 [2024-12-06 15:37:55.269041] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:12.022 [2024-12-06 15:37:55.269126] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.022 [2024-12-06 15:37:55.269149] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:12.022 [2024-12-06 15:37:55.269165] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.022 
[2024-12-06 15:37:55.272087] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.022 [2024-12-06 15:37:55.272255] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:12.022 [2024-12-06 15:37:55.272379] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:12.022 [2024-12-06 15:37:55.272447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:12.022 [2024-12-06 15:37:55.272603] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:12.022 [2024-12-06 15:37:55.272620] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:12.022 [2024-12-06 15:37:55.272888] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:12.022 [2024-12-06 15:37:55.273059] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:12.022 [2024-12-06 15:37:55.273069] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:12.022 [2024-12-06 15:37:55.273269] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.022 pt2 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.022 15:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.281 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.281 "name": "raid_bdev1", 00:11:12.281 "uuid": "253c1b2c-20ac-4fef-849e-f4cb12c1110d", 00:11:12.281 "strip_size_kb": 0, 00:11:12.281 "state": "online", 00:11:12.281 "raid_level": "raid1", 00:11:12.281 "superblock": true, 00:11:12.281 "num_base_bdevs": 2, 00:11:12.281 "num_base_bdevs_discovered": 1, 00:11:12.281 "num_base_bdevs_operational": 1, 00:11:12.281 "base_bdevs_list": [ 00:11:12.281 { 00:11:12.281 "name": null, 00:11:12.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.281 "is_configured": false, 00:11:12.281 "data_offset": 2048, 00:11:12.281 "data_size": 63488 00:11:12.281 }, 00:11:12.281 { 00:11:12.281 "name": "pt2", 00:11:12.281 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:12.281 "is_configured": true, 00:11:12.281 "data_offset": 2048, 00:11:12.281 "data_size": 63488 00:11:12.281 } 00:11:12.281 ] 00:11:12.281 }' 
00:11:12.281 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.281 15:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.540 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:12.540 15:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.540 15:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.540 [2024-12-06 15:37:55.672661] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:12.540 [2024-12-06 15:37:55.672699] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:12.540 [2024-12-06 15:37:55.672797] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:12.540 [2024-12-06 15:37:55.672862] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:12.540 [2024-12-06 15:37:55.672875] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:12.540 15:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.540 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.540 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:12.540 15:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.540 15:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.540 15:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.540 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:12.540 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:11:12.540 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:11:12.540 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:12.540 15:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.541 15:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.541 [2024-12-06 15:37:55.728669] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:12.541 [2024-12-06 15:37:55.728751] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.541 [2024-12-06 15:37:55.728780] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:11:12.541 [2024-12-06 15:37:55.728793] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.541 [2024-12-06 15:37:55.731735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.541 [2024-12-06 15:37:55.731776] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:12.541 [2024-12-06 15:37:55.731884] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:12.541 [2024-12-06 15:37:55.731938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:12.541 [2024-12-06 15:37:55.732114] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:12.541 [2024-12-06 15:37:55.732127] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:12.541 [2024-12-06 15:37:55.732147] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:11:12.541 [2024-12-06 15:37:55.732217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:11:12.541 [2024-12-06 15:37:55.732298] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:12.541 [2024-12-06 15:37:55.732308] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:12.541 [2024-12-06 15:37:55.732610] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:12.541 [2024-12-06 15:37:55.732783] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:12.541 [2024-12-06 15:37:55.732798] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:11:12.541 [2024-12-06 15:37:55.732997] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.541 pt1 00:11:12.541 15:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.541 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:11:12.541 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:12.541 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:12.541 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:12.541 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.541 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:12.541 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:12.541 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.541 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.541 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:12.541 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.541 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.541 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.541 15:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.541 15:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.541 15:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.541 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.541 "name": "raid_bdev1", 00:11:12.541 "uuid": "253c1b2c-20ac-4fef-849e-f4cb12c1110d", 00:11:12.541 "strip_size_kb": 0, 00:11:12.541 "state": "online", 00:11:12.541 "raid_level": "raid1", 00:11:12.541 "superblock": true, 00:11:12.541 "num_base_bdevs": 2, 00:11:12.541 "num_base_bdevs_discovered": 1, 00:11:12.541 "num_base_bdevs_operational": 1, 00:11:12.541 "base_bdevs_list": [ 00:11:12.541 { 00:11:12.541 "name": null, 00:11:12.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.541 "is_configured": false, 00:11:12.541 "data_offset": 2048, 00:11:12.541 "data_size": 63488 00:11:12.541 }, 00:11:12.541 { 00:11:12.541 "name": "pt2", 00:11:12.541 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:12.541 "is_configured": true, 00:11:12.541 "data_offset": 2048, 00:11:12.541 "data_size": 63488 00:11:12.541 } 00:11:12.541 ] 00:11:12.541 }' 00:11:12.541 15:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.541 15:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.110 15:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:13.110 15:37:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:13.110 15:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.110 15:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.110 15:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.110 15:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:13.110 15:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:13.110 15:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:13.110 15:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.110 15:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.110 [2024-12-06 15:37:56.156799] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:13.110 15:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.110 15:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 253c1b2c-20ac-4fef-849e-f4cb12c1110d '!=' 253c1b2c-20ac-4fef-849e-f4cb12c1110d ']' 00:11:13.110 15:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63231 00:11:13.110 15:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63231 ']' 00:11:13.110 15:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63231 00:11:13.110 15:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:13.110 15:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:13.110 15:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63231 00:11:13.110 killing 
process with pid 63231 00:11:13.110 15:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:13.110 15:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:13.110 15:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63231' 00:11:13.110 15:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63231 00:11:13.110 [2024-12-06 15:37:56.237471] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:13.110 15:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63231 00:11:13.110 [2024-12-06 15:37:56.237612] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:13.110 [2024-12-06 15:37:56.237674] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:13.110 [2024-12-06 15:37:56.237694] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:11:13.369 [2024-12-06 15:37:56.469338] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:14.748 15:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:14.748 ************************************ 00:11:14.748 END TEST raid_superblock_test 00:11:14.748 ************************************ 00:11:14.748 00:11:14.748 real 0m5.901s 00:11:14.748 user 0m8.643s 00:11:14.748 sys 0m1.256s 00:11:14.748 15:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:14.748 15:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.748 15:37:57 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:11:14.748 15:37:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:14.748 15:37:57 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.748 15:37:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:14.748 ************************************ 00:11:14.748 START TEST raid_read_error_test 00:11:14.748 ************************************ 00:11:14.748 15:37:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:11:14.748 15:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:14.748 15:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:11:14.748 15:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:14.748 15:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:14.748 15:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:14.748 15:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:14.748 15:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:14.748 15:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:14.748 15:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:14.748 15:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:14.748 15:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:14.748 15:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:14.748 15:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:14.748 15:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:14.748 15:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:14.748 15:37:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:14.748 15:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:14.748 15:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:14.748 15:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:14.748 15:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:14.748 15:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:14.748 15:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.nhbxpyWJRQ 00:11:14.748 15:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63561 00:11:14.748 15:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63561 00:11:14.748 15:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:14.748 15:37:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63561 ']' 00:11:14.748 15:37:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.748 15:37:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:14.748 15:37:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:14.748 15:37:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:14.748 15:37:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.748 [2024-12-06 15:37:57.920408] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:11:14.748 [2024-12-06 15:37:57.920812] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63561 ] 00:11:15.042 [2024-12-06 15:37:58.108883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.042 [2024-12-06 15:37:58.260227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.371 [2024-12-06 15:37:58.508527] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:15.371 [2024-12-06 15:37:58.508563] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:15.631 15:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:15.631 15:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:15.631 15:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:15.631 15:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:15.631 15:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.631 15:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.631 BaseBdev1_malloc 00:11:15.631 15:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.631 15:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:11:15.631 15:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.631 15:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.631 true 00:11:15.631 15:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.631 15:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:15.631 15:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.631 15:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.631 [2024-12-06 15:37:58.882875] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:15.631 [2024-12-06 15:37:58.883090] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.631 [2024-12-06 15:37:58.883126] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:15.631 [2024-12-06 15:37:58.883144] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.631 [2024-12-06 15:37:58.886061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.631 [2024-12-06 15:37:58.886123] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:15.631 BaseBdev1 00:11:15.631 15:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.631 15:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:15.631 15:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:15.631 15:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.631 15:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:15.891 BaseBdev2_malloc 00:11:15.891 15:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.891 15:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:15.891 15:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.891 15:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.891 true 00:11:15.891 15:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.891 15:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:15.891 15:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.891 15:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.891 [2024-12-06 15:37:58.958274] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:15.891 [2024-12-06 15:37:58.958443] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.891 [2024-12-06 15:37:58.958496] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:15.891 [2024-12-06 15:37:58.958595] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.891 [2024-12-06 15:37:58.961251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.891 [2024-12-06 15:37:58.961391] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:15.891 BaseBdev2 00:11:15.891 15:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.891 15:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:11:15.891 15:37:58 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.891 15:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.891 [2024-12-06 15:37:58.970370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:15.891 [2024-12-06 15:37:58.972903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:15.891 [2024-12-06 15:37:58.973104] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:15.891 [2024-12-06 15:37:58.973122] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:15.891 [2024-12-06 15:37:58.973381] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:15.891 [2024-12-06 15:37:58.973604] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:15.891 [2024-12-06 15:37:58.973618] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:15.891 [2024-12-06 15:37:58.973771] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:15.891 15:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.891 15:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:15.891 15:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:15.891 15:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:15.891 15:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.891 15:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.891 15:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:11:15.891 15:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.891 15:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.891 15:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.891 15:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.891 15:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.891 15:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.891 15:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.891 15:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.891 15:37:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.891 15:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.891 "name": "raid_bdev1", 00:11:15.891 "uuid": "61e0f6f6-9fcc-45b3-ad26-b052e09f65e6", 00:11:15.891 "strip_size_kb": 0, 00:11:15.891 "state": "online", 00:11:15.891 "raid_level": "raid1", 00:11:15.891 "superblock": true, 00:11:15.891 "num_base_bdevs": 2, 00:11:15.891 "num_base_bdevs_discovered": 2, 00:11:15.891 "num_base_bdevs_operational": 2, 00:11:15.891 "base_bdevs_list": [ 00:11:15.891 { 00:11:15.891 "name": "BaseBdev1", 00:11:15.891 "uuid": "880a81e9-df50-5d78-9564-099d111a7b53", 00:11:15.891 "is_configured": true, 00:11:15.891 "data_offset": 2048, 00:11:15.891 "data_size": 63488 00:11:15.891 }, 00:11:15.891 { 00:11:15.891 "name": "BaseBdev2", 00:11:15.891 "uuid": "8bb50083-026f-50b5-aab5-408a45db41be", 00:11:15.891 "is_configured": true, 00:11:15.891 "data_offset": 2048, 00:11:15.891 "data_size": 63488 00:11:15.891 } 00:11:15.891 ] 00:11:15.891 }' 00:11:15.891 15:37:59 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.891 15:37:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.150 15:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:16.150 15:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:16.408 [2024-12-06 15:37:59.487468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:17.344 15:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:17.344 15:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.344 15:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.344 15:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.344 15:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:17.344 15:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:17.344 15:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:17.344 15:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:11:17.344 15:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:17.344 15:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:17.344 15:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:17.344 15:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.344 15:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.344 15:38:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:17.344 15:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.344 15:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.344 15:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.344 15:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.344 15:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.344 15:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.344 15:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.344 15:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.344 15:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.344 15:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.344 "name": "raid_bdev1", 00:11:17.344 "uuid": "61e0f6f6-9fcc-45b3-ad26-b052e09f65e6", 00:11:17.344 "strip_size_kb": 0, 00:11:17.344 "state": "online", 00:11:17.344 "raid_level": "raid1", 00:11:17.344 "superblock": true, 00:11:17.344 "num_base_bdevs": 2, 00:11:17.344 "num_base_bdevs_discovered": 2, 00:11:17.344 "num_base_bdevs_operational": 2, 00:11:17.344 "base_bdevs_list": [ 00:11:17.344 { 00:11:17.344 "name": "BaseBdev1", 00:11:17.344 "uuid": "880a81e9-df50-5d78-9564-099d111a7b53", 00:11:17.344 "is_configured": true, 00:11:17.344 "data_offset": 2048, 00:11:17.344 "data_size": 63488 00:11:17.344 }, 00:11:17.344 { 00:11:17.344 "name": "BaseBdev2", 00:11:17.344 "uuid": "8bb50083-026f-50b5-aab5-408a45db41be", 00:11:17.344 "is_configured": true, 00:11:17.344 "data_offset": 2048, 00:11:17.344 "data_size": 63488 
00:11:17.344 } 00:11:17.344 ] 00:11:17.344 }' 00:11:17.344 15:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.344 15:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.601 15:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:17.601 15:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.601 15:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.601 [2024-12-06 15:38:00.830826] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:17.601 [2024-12-06 15:38:00.830884] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:17.601 [2024-12-06 15:38:00.833704] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:17.601 [2024-12-06 15:38:00.833766] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.601 [2024-12-06 15:38:00.833862] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:17.601 [2024-12-06 15:38:00.833879] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:17.601 { 00:11:17.601 "results": [ 00:11:17.601 { 00:11:17.601 "job": "raid_bdev1", 00:11:17.601 "core_mask": "0x1", 00:11:17.601 "workload": "randrw", 00:11:17.601 "percentage": 50, 00:11:17.601 "status": "finished", 00:11:17.601 "queue_depth": 1, 00:11:17.601 "io_size": 131072, 00:11:17.601 "runtime": 1.343158, 00:11:17.601 "iops": 14222.451863444212, 00:11:17.601 "mibps": 1777.8064829305265, 00:11:17.601 "io_failed": 0, 00:11:17.601 "io_timeout": 0, 00:11:17.601 "avg_latency_us": 67.64362072695324, 00:11:17.601 "min_latency_us": 23.338152610441767, 00:11:17.601 "max_latency_us": 1493.641767068273 00:11:17.601 } 00:11:17.601 ], 
00:11:17.601 "core_count": 1 00:11:17.601 } 00:11:17.601 15:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.601 15:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63561 00:11:17.601 15:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63561 ']' 00:11:17.601 15:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63561 00:11:17.601 15:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:17.602 15:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:17.602 15:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63561 00:11:17.602 killing process with pid 63561 00:11:17.602 15:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:17.602 15:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:17.602 15:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63561' 00:11:17.602 15:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63561 00:11:17.602 15:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63561 00:11:17.602 [2024-12-06 15:38:00.884596] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:17.870 [2024-12-06 15:38:01.036983] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:19.240 15:38:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.nhbxpyWJRQ 00:11:19.240 15:38:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:19.240 15:38:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:19.240 ************************************ 00:11:19.240 END 
TEST raid_read_error_test 00:11:19.240 ************************************ 00:11:19.240 15:38:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:19.240 15:38:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:19.240 15:38:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:19.240 15:38:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:19.240 15:38:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:19.240 00:11:19.240 real 0m4.567s 00:11:19.240 user 0m5.311s 00:11:19.240 sys 0m0.716s 00:11:19.240 15:38:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.240 15:38:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.240 15:38:02 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:11:19.240 15:38:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:19.240 15:38:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.240 15:38:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:19.240 ************************************ 00:11:19.240 START TEST raid_write_error_test 00:11:19.240 ************************************ 00:11:19.240 15:38:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:11:19.240 15:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:19.240 15:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:11:19.240 15:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:19.240 15:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:19.240 15:38:02 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:19.240 15:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:19.240 15:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:19.240 15:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:19.240 15:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:19.240 15:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:19.240 15:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:19.240 15:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:19.240 15:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:19.240 15:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:19.240 15:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:19.240 15:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:19.240 15:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:19.240 15:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:19.240 15:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:19.240 15:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:19.240 15:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:19.240 15:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wOveCQfZKE 00:11:19.241 15:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63701 00:11:19.241 15:38:02 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:19.241 15:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63701 00:11:19.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.241 15:38:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63701 ']' 00:11:19.241 15:38:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.241 15:38:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:19.241 15:38:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.241 15:38:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:19.241 15:38:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.498 [2024-12-06 15:38:02.572279] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:11:19.499 [2024-12-06 15:38:02.572438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63701 ] 00:11:19.499 [2024-12-06 15:38:02.758823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.756 [2024-12-06 15:38:02.900196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.013 [2024-12-06 15:38:03.147593] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.014 [2024-12-06 15:38:03.147638] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.272 15:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:20.272 15:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:20.272 15:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:20.272 15:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:20.272 15:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.272 15:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.272 BaseBdev1_malloc 00:11:20.272 15:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.272 15:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:20.272 15:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.272 15:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.272 true 00:11:20.272 15:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:20.272 15:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:20.272 15:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.272 15:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.272 [2024-12-06 15:38:03.491873] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:20.272 [2024-12-06 15:38:03.491945] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:20.272 [2024-12-06 15:38:03.491973] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:20.272 [2024-12-06 15:38:03.491989] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:20.272 [2024-12-06 15:38:03.494759] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:20.273 [2024-12-06 15:38:03.494932] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:20.273 BaseBdev1 00:11:20.273 15:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.273 15:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:20.273 15:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:20.273 15:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.273 15:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.273 BaseBdev2_malloc 00:11:20.273 15:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.273 15:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:20.273 15:38:03 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.273 15:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.273 true 00:11:20.273 15:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.273 15:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:20.273 15:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.273 15:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.532 [2024-12-06 15:38:03.570940] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:20.532 [2024-12-06 15:38:03.571007] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:20.532 [2024-12-06 15:38:03.571029] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:20.532 [2024-12-06 15:38:03.571044] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:20.532 [2024-12-06 15:38:03.573870] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:20.533 [2024-12-06 15:38:03.574048] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:20.533 BaseBdev2 00:11:20.533 15:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.533 15:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:11:20.533 15:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.533 15:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.533 [2024-12-06 15:38:03.582995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:11:20.533 [2024-12-06 15:38:03.585402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:20.533 [2024-12-06 15:38:03.585765] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:20.533 [2024-12-06 15:38:03.585788] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:20.533 [2024-12-06 15:38:03.586074] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:20.533 [2024-12-06 15:38:03.586279] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:20.533 [2024-12-06 15:38:03.586291] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:20.533 [2024-12-06 15:38:03.586457] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:20.533 15:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.533 15:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:20.533 15:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:20.533 15:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:20.533 15:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:20.533 15:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:20.533 15:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:20.533 15:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.533 15:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.533 15:38:03 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.533 15:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.533 15:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.533 15:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.533 15:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.533 15:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.533 15:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.533 15:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.533 "name": "raid_bdev1", 00:11:20.533 "uuid": "235b2716-03ba-4cee-ad54-208d8007d74e", 00:11:20.533 "strip_size_kb": 0, 00:11:20.533 "state": "online", 00:11:20.533 "raid_level": "raid1", 00:11:20.533 "superblock": true, 00:11:20.533 "num_base_bdevs": 2, 00:11:20.533 "num_base_bdevs_discovered": 2, 00:11:20.533 "num_base_bdevs_operational": 2, 00:11:20.533 "base_bdevs_list": [ 00:11:20.533 { 00:11:20.533 "name": "BaseBdev1", 00:11:20.533 "uuid": "ca4f0c10-9924-5736-851f-7f591b1993ec", 00:11:20.533 "is_configured": true, 00:11:20.533 "data_offset": 2048, 00:11:20.533 "data_size": 63488 00:11:20.533 }, 00:11:20.533 { 00:11:20.533 "name": "BaseBdev2", 00:11:20.533 "uuid": "53be9709-f10f-5f59-bce1-c1a9340f2e9b", 00:11:20.533 "is_configured": true, 00:11:20.533 "data_offset": 2048, 00:11:20.533 "data_size": 63488 00:11:20.533 } 00:11:20.533 ] 00:11:20.533 }' 00:11:20.533 15:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.533 15:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.792 15:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:20.792 15:38:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:21.051 [2024-12-06 15:38:04.112300] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:21.988 15:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:21.988 15:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.988 15:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.988 [2024-12-06 15:38:05.023125] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:21.988 [2024-12-06 15:38:05.023203] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:21.988 [2024-12-06 15:38:05.023433] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:11:21.988 15:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.988 15:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:21.988 15:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:21.988 15:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:21.988 15:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:11:21.988 15:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:21.988 15:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:21.988 15:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:21.988 15:38:05 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:21.988 15:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:21.988 15:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:21.988 15:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.988 15:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.988 15:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.988 15:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.988 15:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.988 15:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.988 15:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.988 15:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.988 15:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.988 15:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.988 "name": "raid_bdev1", 00:11:21.988 "uuid": "235b2716-03ba-4cee-ad54-208d8007d74e", 00:11:21.988 "strip_size_kb": 0, 00:11:21.988 "state": "online", 00:11:21.988 "raid_level": "raid1", 00:11:21.988 "superblock": true, 00:11:21.988 "num_base_bdevs": 2, 00:11:21.988 "num_base_bdevs_discovered": 1, 00:11:21.988 "num_base_bdevs_operational": 1, 00:11:21.988 "base_bdevs_list": [ 00:11:21.988 { 00:11:21.988 "name": null, 00:11:21.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.988 "is_configured": false, 00:11:21.988 "data_offset": 0, 00:11:21.988 "data_size": 63488 00:11:21.988 }, 00:11:21.988 { 00:11:21.988 "name": 
"BaseBdev2", 00:11:21.988 "uuid": "53be9709-f10f-5f59-bce1-c1a9340f2e9b", 00:11:21.988 "is_configured": true, 00:11:21.988 "data_offset": 2048, 00:11:21.988 "data_size": 63488 00:11:21.988 } 00:11:21.988 ] 00:11:21.988 }' 00:11:21.988 15:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.988 15:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.247 15:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:22.247 15:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.247 15:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.247 [2024-12-06 15:38:05.452882] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:22.247 [2024-12-06 15:38:05.453083] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:22.247 [2024-12-06 15:38:05.456082] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:22.247 [2024-12-06 15:38:05.456258] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:22.247 [2024-12-06 15:38:05.456346] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:22.247 [2024-12-06 15:38:05.456364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:22.247 { 00:11:22.247 "results": [ 00:11:22.247 { 00:11:22.247 "job": "raid_bdev1", 00:11:22.247 "core_mask": "0x1", 00:11:22.247 "workload": "randrw", 00:11:22.247 "percentage": 50, 00:11:22.247 "status": "finished", 00:11:22.247 "queue_depth": 1, 00:11:22.247 "io_size": 131072, 00:11:22.247 "runtime": 1.340302, 00:11:22.247 "iops": 16796.214584474244, 00:11:22.247 "mibps": 2099.5268230592806, 00:11:22.247 "io_failed": 0, 00:11:22.247 "io_timeout": 0, 
00:11:22.247 "avg_latency_us": 56.86158764410878, 00:11:22.247 "min_latency_us": 24.160642570281123, 00:11:22.247 "max_latency_us": 1612.0803212851406 00:11:22.247 } 00:11:22.247 ], 00:11:22.247 "core_count": 1 00:11:22.247 } 00:11:22.247 15:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.247 15:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63701 00:11:22.247 15:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63701 ']' 00:11:22.247 15:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63701 00:11:22.247 15:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:22.247 15:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:22.247 15:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63701 00:11:22.247 killing process with pid 63701 00:11:22.247 15:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:22.247 15:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:22.247 15:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63701' 00:11:22.247 15:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63701 00:11:22.247 15:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63701 00:11:22.247 [2024-12-06 15:38:05.498985] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:22.515 [2024-12-06 15:38:05.651130] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:23.904 15:38:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wOveCQfZKE 00:11:23.904 15:38:06 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:23.904 15:38:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:23.904 15:38:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:23.904 15:38:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:23.904 15:38:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:23.904 15:38:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:23.904 ************************************ 00:11:23.904 END TEST raid_write_error_test 00:11:23.904 ************************************ 00:11:23.904 15:38:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:23.904 00:11:23.904 real 0m4.545s 00:11:23.904 user 0m5.223s 00:11:23.904 sys 0m0.737s 00:11:23.904 15:38:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.904 15:38:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.904 15:38:07 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:11:23.904 15:38:07 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:23.904 15:38:07 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:11:23.904 15:38:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:23.904 15:38:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.904 15:38:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:23.904 ************************************ 00:11:23.904 START TEST raid_state_function_test 00:11:23.904 ************************************ 00:11:23.904 15:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:11:23.904 15:38:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:23.904 15:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:23.904 15:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:23.904 15:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:23.904 15:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:23.904 15:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:23.904 15:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:23.904 15:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:23.904 15:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:23.905 15:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:23.905 15:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:23.905 15:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:23.905 15:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:23.905 15:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:23.905 15:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:23.905 15:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:23.905 15:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:23.905 15:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:23.905 15:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:23.905 
15:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:23.905 15:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:23.905 15:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:23.905 15:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:23.905 15:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:23.905 15:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:23.905 15:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:23.905 15:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63845 00:11:23.905 15:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:23.905 15:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63845' 00:11:23.905 Process raid pid: 63845 00:11:23.905 15:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63845 00:11:23.905 15:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63845 ']' 00:11:23.905 15:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.905 15:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:23.905 15:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:23.905 15:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:23.905 15:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.905 [2024-12-06 15:38:07.185094] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:11:23.905 [2024-12-06 15:38:07.185265] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:24.163 [2024-12-06 15:38:07.372595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.421 [2024-12-06 15:38:07.527752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.680 [2024-12-06 15:38:07.786753] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:24.680 [2024-12-06 15:38:07.786978] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:24.940 15:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:24.940 15:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:24.940 15:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:24.940 15:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.940 15:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.940 [2024-12-06 15:38:08.118371] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:24.940 [2024-12-06 15:38:08.118461] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:24.940 [2024-12-06 15:38:08.118476] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:24.940 [2024-12-06 15:38:08.118491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:24.940 [2024-12-06 15:38:08.118499] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:24.940 [2024-12-06 15:38:08.118530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:24.940 15:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.940 15:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:24.940 15:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.940 15:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.940 15:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:24.940 15:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.940 15:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:24.940 15:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.940 15:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.940 15:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.940 15:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.940 15:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.940 15:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:11:24.940 15:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.940 15:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.940 15:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.940 15:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.940 "name": "Existed_Raid", 00:11:24.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.940 "strip_size_kb": 64, 00:11:24.940 "state": "configuring", 00:11:24.940 "raid_level": "raid0", 00:11:24.940 "superblock": false, 00:11:24.940 "num_base_bdevs": 3, 00:11:24.940 "num_base_bdevs_discovered": 0, 00:11:24.940 "num_base_bdevs_operational": 3, 00:11:24.940 "base_bdevs_list": [ 00:11:24.940 { 00:11:24.940 "name": "BaseBdev1", 00:11:24.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.940 "is_configured": false, 00:11:24.940 "data_offset": 0, 00:11:24.940 "data_size": 0 00:11:24.940 }, 00:11:24.940 { 00:11:24.940 "name": "BaseBdev2", 00:11:24.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.940 "is_configured": false, 00:11:24.940 "data_offset": 0, 00:11:24.940 "data_size": 0 00:11:24.940 }, 00:11:24.940 { 00:11:24.940 "name": "BaseBdev3", 00:11:24.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.940 "is_configured": false, 00:11:24.940 "data_offset": 0, 00:11:24.940 "data_size": 0 00:11:24.940 } 00:11:24.940 ] 00:11:24.940 }' 00:11:24.940 15:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.940 15:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.509 15:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:25.509 15:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.509 15:38:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.509 [2024-12-06 15:38:08.578294] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:25.509 [2024-12-06 15:38:08.578345] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:25.509 15:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.509 15:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:25.509 15:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.509 15:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.509 [2024-12-06 15:38:08.590320] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:25.509 [2024-12-06 15:38:08.590402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:25.509 [2024-12-06 15:38:08.590418] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:25.509 [2024-12-06 15:38:08.590433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:25.509 [2024-12-06 15:38:08.590443] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:25.509 [2024-12-06 15:38:08.590456] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:25.509 15:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.509 15:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:25.509 15:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:25.509 15:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.509 [2024-12-06 15:38:08.644112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:25.509 BaseBdev1 00:11:25.510 15:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.510 15:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:25.510 15:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:25.510 15:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:25.510 15:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:25.510 15:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:25.510 15:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:25.510 15:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:25.510 15:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.510 15:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.510 15:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.510 15:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:25.510 15:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.510 15:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.510 [ 00:11:25.510 { 00:11:25.510 "name": "BaseBdev1", 00:11:25.510 "aliases": [ 00:11:25.510 "d3c1ce73-2ee1-44af-bec9-e4da877fe5ca" 00:11:25.510 ], 00:11:25.510 
"product_name": "Malloc disk", 00:11:25.510 "block_size": 512, 00:11:25.510 "num_blocks": 65536, 00:11:25.510 "uuid": "d3c1ce73-2ee1-44af-bec9-e4da877fe5ca", 00:11:25.510 "assigned_rate_limits": { 00:11:25.510 "rw_ios_per_sec": 0, 00:11:25.510 "rw_mbytes_per_sec": 0, 00:11:25.510 "r_mbytes_per_sec": 0, 00:11:25.510 "w_mbytes_per_sec": 0 00:11:25.510 }, 00:11:25.510 "claimed": true, 00:11:25.510 "claim_type": "exclusive_write", 00:11:25.510 "zoned": false, 00:11:25.510 "supported_io_types": { 00:11:25.510 "read": true, 00:11:25.510 "write": true, 00:11:25.510 "unmap": true, 00:11:25.510 "flush": true, 00:11:25.510 "reset": true, 00:11:25.510 "nvme_admin": false, 00:11:25.510 "nvme_io": false, 00:11:25.510 "nvme_io_md": false, 00:11:25.510 "write_zeroes": true, 00:11:25.510 "zcopy": true, 00:11:25.510 "get_zone_info": false, 00:11:25.510 "zone_management": false, 00:11:25.510 "zone_append": false, 00:11:25.510 "compare": false, 00:11:25.510 "compare_and_write": false, 00:11:25.510 "abort": true, 00:11:25.510 "seek_hole": false, 00:11:25.510 "seek_data": false, 00:11:25.510 "copy": true, 00:11:25.510 "nvme_iov_md": false 00:11:25.510 }, 00:11:25.510 "memory_domains": [ 00:11:25.510 { 00:11:25.510 "dma_device_id": "system", 00:11:25.510 "dma_device_type": 1 00:11:25.510 }, 00:11:25.510 { 00:11:25.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.510 "dma_device_type": 2 00:11:25.510 } 00:11:25.510 ], 00:11:25.510 "driver_specific": {} 00:11:25.510 } 00:11:25.510 ] 00:11:25.510 15:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.510 15:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:25.510 15:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:25.510 15:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.510 15:38:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.510 15:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:25.510 15:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.510 15:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:25.510 15:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.510 15:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.510 15:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.510 15:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.510 15:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.510 15:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.510 15:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.510 15:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.510 15:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.510 15:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.510 "name": "Existed_Raid", 00:11:25.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.510 "strip_size_kb": 64, 00:11:25.510 "state": "configuring", 00:11:25.510 "raid_level": "raid0", 00:11:25.510 "superblock": false, 00:11:25.510 "num_base_bdevs": 3, 00:11:25.510 "num_base_bdevs_discovered": 1, 00:11:25.510 "num_base_bdevs_operational": 3, 00:11:25.510 "base_bdevs_list": [ 00:11:25.510 { 00:11:25.510 "name": "BaseBdev1", 
00:11:25.510 "uuid": "d3c1ce73-2ee1-44af-bec9-e4da877fe5ca", 00:11:25.510 "is_configured": true, 00:11:25.510 "data_offset": 0, 00:11:25.510 "data_size": 65536 00:11:25.510 }, 00:11:25.510 { 00:11:25.510 "name": "BaseBdev2", 00:11:25.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.510 "is_configured": false, 00:11:25.510 "data_offset": 0, 00:11:25.510 "data_size": 0 00:11:25.510 }, 00:11:25.510 { 00:11:25.510 "name": "BaseBdev3", 00:11:25.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.510 "is_configured": false, 00:11:25.510 "data_offset": 0, 00:11:25.510 "data_size": 0 00:11:25.510 } 00:11:25.510 ] 00:11:25.510 }' 00:11:25.510 15:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.510 15:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.079 15:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:26.079 15:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.079 15:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.079 [2024-12-06 15:38:09.135529] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:26.079 [2024-12-06 15:38:09.135608] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:26.079 15:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.079 15:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:26.079 15:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.079 15:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.079 [2024-12-06 
15:38:09.147603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:26.079 [2024-12-06 15:38:09.150306] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:26.079 [2024-12-06 15:38:09.150370] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:26.079 [2024-12-06 15:38:09.150384] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:26.079 [2024-12-06 15:38:09.150398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:26.079 15:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.079 15:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:26.079 15:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:26.079 15:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:26.079 15:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.079 15:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.079 15:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:26.080 15:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.080 15:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:26.080 15:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.080 15:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.080 15:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:26.080 15:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.080 15:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.080 15:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.080 15:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.080 15:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.080 15:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.080 15:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.080 "name": "Existed_Raid", 00:11:26.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.080 "strip_size_kb": 64, 00:11:26.080 "state": "configuring", 00:11:26.080 "raid_level": "raid0", 00:11:26.080 "superblock": false, 00:11:26.080 "num_base_bdevs": 3, 00:11:26.080 "num_base_bdevs_discovered": 1, 00:11:26.080 "num_base_bdevs_operational": 3, 00:11:26.080 "base_bdevs_list": [ 00:11:26.080 { 00:11:26.080 "name": "BaseBdev1", 00:11:26.080 "uuid": "d3c1ce73-2ee1-44af-bec9-e4da877fe5ca", 00:11:26.080 "is_configured": true, 00:11:26.080 "data_offset": 0, 00:11:26.080 "data_size": 65536 00:11:26.080 }, 00:11:26.080 { 00:11:26.080 "name": "BaseBdev2", 00:11:26.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.080 "is_configured": false, 00:11:26.080 "data_offset": 0, 00:11:26.080 "data_size": 0 00:11:26.080 }, 00:11:26.080 { 00:11:26.080 "name": "BaseBdev3", 00:11:26.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.080 "is_configured": false, 00:11:26.080 "data_offset": 0, 00:11:26.080 "data_size": 0 00:11:26.080 } 00:11:26.080 ] 00:11:26.080 }' 00:11:26.080 15:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:11:26.080 15:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.339 15:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:26.339 15:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.339 15:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.599 [2024-12-06 15:38:09.655118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:26.599 BaseBdev2 00:11:26.599 15:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.599 15:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:26.599 15:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:26.599 15:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:26.599 15:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:26.599 15:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:26.599 15:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:26.599 15:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:26.599 15:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.599 15:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.599 15:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.599 15:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:26.599 15:38:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.599 15:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.599 [ 00:11:26.599 { 00:11:26.599 "name": "BaseBdev2", 00:11:26.599 "aliases": [ 00:11:26.599 "7ea81a27-4070-435f-83c9-2b7d69f28a74" 00:11:26.599 ], 00:11:26.599 "product_name": "Malloc disk", 00:11:26.599 "block_size": 512, 00:11:26.599 "num_blocks": 65536, 00:11:26.599 "uuid": "7ea81a27-4070-435f-83c9-2b7d69f28a74", 00:11:26.599 "assigned_rate_limits": { 00:11:26.599 "rw_ios_per_sec": 0, 00:11:26.599 "rw_mbytes_per_sec": 0, 00:11:26.599 "r_mbytes_per_sec": 0, 00:11:26.599 "w_mbytes_per_sec": 0 00:11:26.599 }, 00:11:26.599 "claimed": true, 00:11:26.599 "claim_type": "exclusive_write", 00:11:26.599 "zoned": false, 00:11:26.599 "supported_io_types": { 00:11:26.599 "read": true, 00:11:26.599 "write": true, 00:11:26.599 "unmap": true, 00:11:26.599 "flush": true, 00:11:26.599 "reset": true, 00:11:26.599 "nvme_admin": false, 00:11:26.599 "nvme_io": false, 00:11:26.599 "nvme_io_md": false, 00:11:26.599 "write_zeroes": true, 00:11:26.599 "zcopy": true, 00:11:26.599 "get_zone_info": false, 00:11:26.599 "zone_management": false, 00:11:26.599 "zone_append": false, 00:11:26.599 "compare": false, 00:11:26.599 "compare_and_write": false, 00:11:26.599 "abort": true, 00:11:26.599 "seek_hole": false, 00:11:26.599 "seek_data": false, 00:11:26.599 "copy": true, 00:11:26.599 "nvme_iov_md": false 00:11:26.599 }, 00:11:26.599 "memory_domains": [ 00:11:26.599 { 00:11:26.599 "dma_device_id": "system", 00:11:26.599 "dma_device_type": 1 00:11:26.599 }, 00:11:26.599 { 00:11:26.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.599 "dma_device_type": 2 00:11:26.599 } 00:11:26.599 ], 00:11:26.599 "driver_specific": {} 00:11:26.599 } 00:11:26.599 ] 00:11:26.599 15:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.599 15:38:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:26.599 15:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:26.599 15:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:26.599 15:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:26.599 15:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.599 15:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.599 15:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:26.599 15:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.599 15:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:26.599 15:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.599 15:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.599 15:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.599 15:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.599 15:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.599 15:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.599 15:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.599 15:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.599 15:38:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.599 15:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.599 "name": "Existed_Raid", 00:11:26.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.599 "strip_size_kb": 64, 00:11:26.599 "state": "configuring", 00:11:26.599 "raid_level": "raid0", 00:11:26.599 "superblock": false, 00:11:26.599 "num_base_bdevs": 3, 00:11:26.599 "num_base_bdevs_discovered": 2, 00:11:26.599 "num_base_bdevs_operational": 3, 00:11:26.599 "base_bdevs_list": [ 00:11:26.599 { 00:11:26.599 "name": "BaseBdev1", 00:11:26.599 "uuid": "d3c1ce73-2ee1-44af-bec9-e4da877fe5ca", 00:11:26.599 "is_configured": true, 00:11:26.599 "data_offset": 0, 00:11:26.599 "data_size": 65536 00:11:26.599 }, 00:11:26.599 { 00:11:26.599 "name": "BaseBdev2", 00:11:26.599 "uuid": "7ea81a27-4070-435f-83c9-2b7d69f28a74", 00:11:26.599 "is_configured": true, 00:11:26.599 "data_offset": 0, 00:11:26.599 "data_size": 65536 00:11:26.599 }, 00:11:26.599 { 00:11:26.599 "name": "BaseBdev3", 00:11:26.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.599 "is_configured": false, 00:11:26.599 "data_offset": 0, 00:11:26.599 "data_size": 0 00:11:26.599 } 00:11:26.599 ] 00:11:26.599 }' 00:11:26.599 15:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.599 15:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.859 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:26.859 15:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.859 15:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.118 [2024-12-06 15:38:10.204110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:27.118 [2024-12-06 15:38:10.204467] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:27.118 [2024-12-06 15:38:10.204582] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:27.118 [2024-12-06 15:38:10.205049] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:27.118 [2024-12-06 15:38:10.205391] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:27.118 [2024-12-06 15:38:10.205521] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:27.118 [2024-12-06 15:38:10.205991] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:27.118 BaseBdev3 00:11:27.118 15:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.118 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:27.118 15:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:27.118 15:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:27.118 15:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:27.118 15:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:27.118 15:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:27.118 15:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:27.118 15:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.118 15:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.118 15:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.118 
15:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:27.118 15:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.118 15:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.118 [ 00:11:27.118 { 00:11:27.118 "name": "BaseBdev3", 00:11:27.118 "aliases": [ 00:11:27.118 "03cc1e1b-3679-4aba-a4ba-4b88d74bdfcb" 00:11:27.118 ], 00:11:27.118 "product_name": "Malloc disk", 00:11:27.118 "block_size": 512, 00:11:27.118 "num_blocks": 65536, 00:11:27.118 "uuid": "03cc1e1b-3679-4aba-a4ba-4b88d74bdfcb", 00:11:27.118 "assigned_rate_limits": { 00:11:27.118 "rw_ios_per_sec": 0, 00:11:27.118 "rw_mbytes_per_sec": 0, 00:11:27.118 "r_mbytes_per_sec": 0, 00:11:27.118 "w_mbytes_per_sec": 0 00:11:27.118 }, 00:11:27.118 "claimed": true, 00:11:27.118 "claim_type": "exclusive_write", 00:11:27.118 "zoned": false, 00:11:27.118 "supported_io_types": { 00:11:27.118 "read": true, 00:11:27.118 "write": true, 00:11:27.118 "unmap": true, 00:11:27.118 "flush": true, 00:11:27.118 "reset": true, 00:11:27.118 "nvme_admin": false, 00:11:27.118 "nvme_io": false, 00:11:27.118 "nvme_io_md": false, 00:11:27.118 "write_zeroes": true, 00:11:27.118 "zcopy": true, 00:11:27.118 "get_zone_info": false, 00:11:27.118 "zone_management": false, 00:11:27.118 "zone_append": false, 00:11:27.118 "compare": false, 00:11:27.119 "compare_and_write": false, 00:11:27.119 "abort": true, 00:11:27.119 "seek_hole": false, 00:11:27.119 "seek_data": false, 00:11:27.119 "copy": true, 00:11:27.119 "nvme_iov_md": false 00:11:27.119 }, 00:11:27.119 "memory_domains": [ 00:11:27.119 { 00:11:27.119 "dma_device_id": "system", 00:11:27.119 "dma_device_type": 1 00:11:27.119 }, 00:11:27.119 { 00:11:27.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.119 "dma_device_type": 2 00:11:27.119 } 00:11:27.119 ], 00:11:27.119 "driver_specific": {} 00:11:27.119 } 00:11:27.119 ] 
00:11:27.119 15:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.119 15:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:27.119 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:27.119 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:27.119 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:27.119 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.119 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:27.119 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:27.119 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.119 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:27.119 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.119 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.119 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.119 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.119 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.119 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.119 15:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.119 15:38:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:27.119 15:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.119 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.119 "name": "Existed_Raid", 00:11:27.119 "uuid": "52b12344-9f2b-4183-8ec0-20ac468cc524", 00:11:27.119 "strip_size_kb": 64, 00:11:27.119 "state": "online", 00:11:27.119 "raid_level": "raid0", 00:11:27.119 "superblock": false, 00:11:27.119 "num_base_bdevs": 3, 00:11:27.119 "num_base_bdevs_discovered": 3, 00:11:27.119 "num_base_bdevs_operational": 3, 00:11:27.119 "base_bdevs_list": [ 00:11:27.119 { 00:11:27.119 "name": "BaseBdev1", 00:11:27.119 "uuid": "d3c1ce73-2ee1-44af-bec9-e4da877fe5ca", 00:11:27.119 "is_configured": true, 00:11:27.119 "data_offset": 0, 00:11:27.119 "data_size": 65536 00:11:27.119 }, 00:11:27.119 { 00:11:27.119 "name": "BaseBdev2", 00:11:27.119 "uuid": "7ea81a27-4070-435f-83c9-2b7d69f28a74", 00:11:27.119 "is_configured": true, 00:11:27.119 "data_offset": 0, 00:11:27.119 "data_size": 65536 00:11:27.119 }, 00:11:27.119 { 00:11:27.119 "name": "BaseBdev3", 00:11:27.119 "uuid": "03cc1e1b-3679-4aba-a4ba-4b88d74bdfcb", 00:11:27.119 "is_configured": true, 00:11:27.119 "data_offset": 0, 00:11:27.119 "data_size": 65536 00:11:27.119 } 00:11:27.119 ] 00:11:27.119 }' 00:11:27.119 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.119 15:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.379 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:27.379 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:27.379 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:27.379 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:11:27.379 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:27.379 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:27.379 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:27.379 15:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.379 15:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.379 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:27.379 [2024-12-06 15:38:10.667969] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:27.638 15:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.638 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:27.638 "name": "Existed_Raid", 00:11:27.638 "aliases": [ 00:11:27.638 "52b12344-9f2b-4183-8ec0-20ac468cc524" 00:11:27.638 ], 00:11:27.638 "product_name": "Raid Volume", 00:11:27.638 "block_size": 512, 00:11:27.638 "num_blocks": 196608, 00:11:27.638 "uuid": "52b12344-9f2b-4183-8ec0-20ac468cc524", 00:11:27.638 "assigned_rate_limits": { 00:11:27.638 "rw_ios_per_sec": 0, 00:11:27.638 "rw_mbytes_per_sec": 0, 00:11:27.638 "r_mbytes_per_sec": 0, 00:11:27.638 "w_mbytes_per_sec": 0 00:11:27.638 }, 00:11:27.638 "claimed": false, 00:11:27.638 "zoned": false, 00:11:27.639 "supported_io_types": { 00:11:27.639 "read": true, 00:11:27.639 "write": true, 00:11:27.639 "unmap": true, 00:11:27.639 "flush": true, 00:11:27.639 "reset": true, 00:11:27.639 "nvme_admin": false, 00:11:27.639 "nvme_io": false, 00:11:27.639 "nvme_io_md": false, 00:11:27.639 "write_zeroes": true, 00:11:27.639 "zcopy": false, 00:11:27.639 "get_zone_info": false, 00:11:27.639 "zone_management": false, 00:11:27.639 
"zone_append": false, 00:11:27.639 "compare": false, 00:11:27.639 "compare_and_write": false, 00:11:27.639 "abort": false, 00:11:27.639 "seek_hole": false, 00:11:27.639 "seek_data": false, 00:11:27.639 "copy": false, 00:11:27.639 "nvme_iov_md": false 00:11:27.639 }, 00:11:27.639 "memory_domains": [ 00:11:27.639 { 00:11:27.639 "dma_device_id": "system", 00:11:27.639 "dma_device_type": 1 00:11:27.639 }, 00:11:27.639 { 00:11:27.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.639 "dma_device_type": 2 00:11:27.639 }, 00:11:27.639 { 00:11:27.639 "dma_device_id": "system", 00:11:27.639 "dma_device_type": 1 00:11:27.639 }, 00:11:27.639 { 00:11:27.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.639 "dma_device_type": 2 00:11:27.639 }, 00:11:27.639 { 00:11:27.639 "dma_device_id": "system", 00:11:27.639 "dma_device_type": 1 00:11:27.639 }, 00:11:27.639 { 00:11:27.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.639 "dma_device_type": 2 00:11:27.639 } 00:11:27.639 ], 00:11:27.639 "driver_specific": { 00:11:27.639 "raid": { 00:11:27.639 "uuid": "52b12344-9f2b-4183-8ec0-20ac468cc524", 00:11:27.639 "strip_size_kb": 64, 00:11:27.639 "state": "online", 00:11:27.639 "raid_level": "raid0", 00:11:27.639 "superblock": false, 00:11:27.639 "num_base_bdevs": 3, 00:11:27.639 "num_base_bdevs_discovered": 3, 00:11:27.639 "num_base_bdevs_operational": 3, 00:11:27.639 "base_bdevs_list": [ 00:11:27.639 { 00:11:27.639 "name": "BaseBdev1", 00:11:27.639 "uuid": "d3c1ce73-2ee1-44af-bec9-e4da877fe5ca", 00:11:27.639 "is_configured": true, 00:11:27.639 "data_offset": 0, 00:11:27.639 "data_size": 65536 00:11:27.639 }, 00:11:27.639 { 00:11:27.639 "name": "BaseBdev2", 00:11:27.639 "uuid": "7ea81a27-4070-435f-83c9-2b7d69f28a74", 00:11:27.639 "is_configured": true, 00:11:27.639 "data_offset": 0, 00:11:27.639 "data_size": 65536 00:11:27.639 }, 00:11:27.639 { 00:11:27.639 "name": "BaseBdev3", 00:11:27.639 "uuid": "03cc1e1b-3679-4aba-a4ba-4b88d74bdfcb", 00:11:27.639 "is_configured": true, 
00:11:27.639 "data_offset": 0, 00:11:27.639 "data_size": 65536 00:11:27.639 } 00:11:27.639 ] 00:11:27.639 } 00:11:27.639 } 00:11:27.639 }' 00:11:27.639 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:27.639 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:27.639 BaseBdev2 00:11:27.639 BaseBdev3' 00:11:27.639 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.639 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:27.639 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.639 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:27.639 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.639 15:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.639 15:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.639 15:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.639 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.639 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.639 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.639 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:27.639 15:38:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.639 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.639 15:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.639 15:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.639 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.639 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.639 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.639 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:27.639 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.639 15:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.639 15:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.639 15:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.639 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.639 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.639 15:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:27.639 15:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.639 15:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.639 [2024-12-06 15:38:10.927287] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:27.639 [2024-12-06 15:38:10.927320] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:27.639 [2024-12-06 15:38:10.927392] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:27.898 15:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.898 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:27.898 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:27.898 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:27.898 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:27.898 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:27.898 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:11:27.898 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.898 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:27.898 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:27.898 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.898 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:27.898 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.898 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.898 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:27.898 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.898 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.898 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.898 15:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.898 15:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.898 15:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.898 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.898 "name": "Existed_Raid", 00:11:27.898 "uuid": "52b12344-9f2b-4183-8ec0-20ac468cc524", 00:11:27.898 "strip_size_kb": 64, 00:11:27.898 "state": "offline", 00:11:27.898 "raid_level": "raid0", 00:11:27.898 "superblock": false, 00:11:27.898 "num_base_bdevs": 3, 00:11:27.898 "num_base_bdevs_discovered": 2, 00:11:27.898 "num_base_bdevs_operational": 2, 00:11:27.898 "base_bdevs_list": [ 00:11:27.898 { 00:11:27.898 "name": null, 00:11:27.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.898 "is_configured": false, 00:11:27.898 "data_offset": 0, 00:11:27.898 "data_size": 65536 00:11:27.898 }, 00:11:27.898 { 00:11:27.898 "name": "BaseBdev2", 00:11:27.898 "uuid": "7ea81a27-4070-435f-83c9-2b7d69f28a74", 00:11:27.898 "is_configured": true, 00:11:27.898 "data_offset": 0, 00:11:27.898 "data_size": 65536 00:11:27.898 }, 00:11:27.898 { 00:11:27.898 "name": "BaseBdev3", 00:11:27.898 "uuid": "03cc1e1b-3679-4aba-a4ba-4b88d74bdfcb", 00:11:27.898 "is_configured": true, 00:11:27.898 "data_offset": 0, 00:11:27.898 "data_size": 65536 00:11:27.898 } 00:11:27.898 ] 00:11:27.898 }' 00:11:27.899 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.899 15:38:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.467 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:28.467 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:28.467 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:28.467 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.467 15:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.467 15:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.467 15:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.467 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:28.467 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:28.467 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:28.467 15:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.467 15:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.467 [2024-12-06 15:38:11.539405] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:28.467 15:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.467 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:28.467 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:28.467 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.467 15:38:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:28.467 15:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.467 15:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.467 15:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.467 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:28.467 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:28.467 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:28.467 15:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.467 15:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.467 [2024-12-06 15:38:11.701926] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:28.467 [2024-12-06 15:38:11.702185] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:28.726 15:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.726 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:28.726 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:28.726 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.726 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:28.726 15:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.726 15:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:28.726 15:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.726 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:28.726 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:28.726 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:28.726 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:28.726 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:28.726 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:28.726 15:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.726 15:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.726 BaseBdev2 00:11:28.726 15:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.726 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:28.726 15:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:28.726 15:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:28.726 15:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:28.726 15:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:28.726 15:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:28.726 15:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:28.726 15:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:28.726 15:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.726 15:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.726 15:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:28.726 15:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.726 15:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.726 [ 00:11:28.726 { 00:11:28.726 "name": "BaseBdev2", 00:11:28.726 "aliases": [ 00:11:28.726 "1f420d93-8ca5-41cb-8828-ba3ac05961c3" 00:11:28.726 ], 00:11:28.726 "product_name": "Malloc disk", 00:11:28.726 "block_size": 512, 00:11:28.726 "num_blocks": 65536, 00:11:28.726 "uuid": "1f420d93-8ca5-41cb-8828-ba3ac05961c3", 00:11:28.726 "assigned_rate_limits": { 00:11:28.726 "rw_ios_per_sec": 0, 00:11:28.726 "rw_mbytes_per_sec": 0, 00:11:28.726 "r_mbytes_per_sec": 0, 00:11:28.726 "w_mbytes_per_sec": 0 00:11:28.726 }, 00:11:28.726 "claimed": false, 00:11:28.726 "zoned": false, 00:11:28.726 "supported_io_types": { 00:11:28.726 "read": true, 00:11:28.726 "write": true, 00:11:28.726 "unmap": true, 00:11:28.726 "flush": true, 00:11:28.726 "reset": true, 00:11:28.726 "nvme_admin": false, 00:11:28.726 "nvme_io": false, 00:11:28.726 "nvme_io_md": false, 00:11:28.726 "write_zeroes": true, 00:11:28.726 "zcopy": true, 00:11:28.726 "get_zone_info": false, 00:11:28.726 "zone_management": false, 00:11:28.726 "zone_append": false, 00:11:28.726 "compare": false, 00:11:28.726 "compare_and_write": false, 00:11:28.726 "abort": true, 00:11:28.726 "seek_hole": false, 00:11:28.726 "seek_data": false, 00:11:28.726 "copy": true, 00:11:28.726 "nvme_iov_md": false 00:11:28.726 }, 00:11:28.726 "memory_domains": [ 00:11:28.726 { 00:11:28.726 "dma_device_id": "system", 00:11:28.726 "dma_device_type": 1 00:11:28.726 }, 
00:11:28.726 { 00:11:28.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.726 "dma_device_type": 2 00:11:28.726 } 00:11:28.726 ], 00:11:28.726 "driver_specific": {} 00:11:28.726 } 00:11:28.726 ] 00:11:28.727 15:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.727 15:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:28.727 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:28.727 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:28.727 15:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:28.727 15:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.727 15:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.727 BaseBdev3 00:11:28.727 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.727 15:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:28.727 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:28.727 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:28.727 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:28.727 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:28.727 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:28.727 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:28.727 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:28.727 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.986 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.986 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:28.986 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.986 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.986 [ 00:11:28.986 { 00:11:28.986 "name": "BaseBdev3", 00:11:28.986 "aliases": [ 00:11:28.986 "ddd46f19-426c-4958-a6d7-f68298f31d61" 00:11:28.986 ], 00:11:28.986 "product_name": "Malloc disk", 00:11:28.986 "block_size": 512, 00:11:28.986 "num_blocks": 65536, 00:11:28.986 "uuid": "ddd46f19-426c-4958-a6d7-f68298f31d61", 00:11:28.986 "assigned_rate_limits": { 00:11:28.986 "rw_ios_per_sec": 0, 00:11:28.986 "rw_mbytes_per_sec": 0, 00:11:28.986 "r_mbytes_per_sec": 0, 00:11:28.986 "w_mbytes_per_sec": 0 00:11:28.986 }, 00:11:28.986 "claimed": false, 00:11:28.986 "zoned": false, 00:11:28.986 "supported_io_types": { 00:11:28.986 "read": true, 00:11:28.986 "write": true, 00:11:28.986 "unmap": true, 00:11:28.986 "flush": true, 00:11:28.986 "reset": true, 00:11:28.986 "nvme_admin": false, 00:11:28.986 "nvme_io": false, 00:11:28.986 "nvme_io_md": false, 00:11:28.986 "write_zeroes": true, 00:11:28.986 "zcopy": true, 00:11:28.986 "get_zone_info": false, 00:11:28.986 "zone_management": false, 00:11:28.986 "zone_append": false, 00:11:28.986 "compare": false, 00:11:28.986 "compare_and_write": false, 00:11:28.986 "abort": true, 00:11:28.986 "seek_hole": false, 00:11:28.986 "seek_data": false, 00:11:28.986 "copy": true, 00:11:28.986 "nvme_iov_md": false 00:11:28.986 }, 00:11:28.986 "memory_domains": [ 00:11:28.986 { 00:11:28.986 "dma_device_id": "system", 00:11:28.986 "dma_device_type": 1 00:11:28.986 }, 00:11:28.986 { 
00:11:28.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.986 "dma_device_type": 2 00:11:28.986 } 00:11:28.986 ], 00:11:28.986 "driver_specific": {} 00:11:28.986 } 00:11:28.986 ] 00:11:28.986 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.986 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:28.986 15:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:28.986 15:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:28.986 15:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:28.986 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.986 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.986 [2024-12-06 15:38:12.067081] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:28.986 [2024-12-06 15:38:12.067160] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:28.986 [2024-12-06 15:38:12.067200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:28.986 [2024-12-06 15:38:12.069802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:28.986 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.986 15:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:28.986 15:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.986 15:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:11:28.986 15:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:28.986 15:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.986 15:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:28.986 15:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.986 15:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.986 15:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.986 15:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.986 15:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.986 15:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.986 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.986 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.986 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.986 15:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.986 "name": "Existed_Raid", 00:11:28.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.986 "strip_size_kb": 64, 00:11:28.986 "state": "configuring", 00:11:28.986 "raid_level": "raid0", 00:11:28.986 "superblock": false, 00:11:28.986 "num_base_bdevs": 3, 00:11:28.986 "num_base_bdevs_discovered": 2, 00:11:28.986 "num_base_bdevs_operational": 3, 00:11:28.986 "base_bdevs_list": [ 00:11:28.986 { 00:11:28.986 "name": "BaseBdev1", 00:11:28.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.986 
"is_configured": false, 00:11:28.986 "data_offset": 0, 00:11:28.986 "data_size": 0 00:11:28.986 }, 00:11:28.986 { 00:11:28.986 "name": "BaseBdev2", 00:11:28.986 "uuid": "1f420d93-8ca5-41cb-8828-ba3ac05961c3", 00:11:28.986 "is_configured": true, 00:11:28.986 "data_offset": 0, 00:11:28.986 "data_size": 65536 00:11:28.986 }, 00:11:28.986 { 00:11:28.986 "name": "BaseBdev3", 00:11:28.986 "uuid": "ddd46f19-426c-4958-a6d7-f68298f31d61", 00:11:28.986 "is_configured": true, 00:11:28.986 "data_offset": 0, 00:11:28.986 "data_size": 65536 00:11:28.986 } 00:11:28.986 ] 00:11:28.986 }' 00:11:28.986 15:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.986 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.246 15:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:29.246 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.246 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.246 [2024-12-06 15:38:12.502466] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:29.246 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.246 15:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:29.246 15:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.246 15:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.246 15:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:29.246 15:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.246 15:38:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:29.246 15:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.246 15:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.246 15:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.246 15:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.246 15:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.246 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.246 15:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.246 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.246 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.506 15:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.506 "name": "Existed_Raid", 00:11:29.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.506 "strip_size_kb": 64, 00:11:29.506 "state": "configuring", 00:11:29.506 "raid_level": "raid0", 00:11:29.506 "superblock": false, 00:11:29.506 "num_base_bdevs": 3, 00:11:29.506 "num_base_bdevs_discovered": 1, 00:11:29.506 "num_base_bdevs_operational": 3, 00:11:29.506 "base_bdevs_list": [ 00:11:29.506 { 00:11:29.506 "name": "BaseBdev1", 00:11:29.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.506 "is_configured": false, 00:11:29.506 "data_offset": 0, 00:11:29.506 "data_size": 0 00:11:29.506 }, 00:11:29.506 { 00:11:29.506 "name": null, 00:11:29.506 "uuid": "1f420d93-8ca5-41cb-8828-ba3ac05961c3", 00:11:29.506 "is_configured": false, 00:11:29.506 "data_offset": 0, 
00:11:29.506 "data_size": 65536 00:11:29.506 }, 00:11:29.506 { 00:11:29.506 "name": "BaseBdev3", 00:11:29.506 "uuid": "ddd46f19-426c-4958-a6d7-f68298f31d61", 00:11:29.506 "is_configured": true, 00:11:29.506 "data_offset": 0, 00:11:29.506 "data_size": 65536 00:11:29.506 } 00:11:29.506 ] 00:11:29.506 }' 00:11:29.506 15:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.506 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.766 15:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.766 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.766 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.766 15:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:29.766 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.766 15:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:29.766 15:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:29.766 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.766 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.766 [2024-12-06 15:38:12.984000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:29.766 BaseBdev1 00:11:29.766 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.766 15:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:29.766 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:11:29.766 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:29.766 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:29.766 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:29.766 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:29.766 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:29.766 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.766 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.766 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.766 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:29.766 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.766 15:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.766 [ 00:11:29.766 { 00:11:29.766 "name": "BaseBdev1", 00:11:29.766 "aliases": [ 00:11:29.766 "5276df87-c7e6-4afa-a7a3-bb85d887d026" 00:11:29.766 ], 00:11:29.766 "product_name": "Malloc disk", 00:11:29.766 "block_size": 512, 00:11:29.766 "num_blocks": 65536, 00:11:29.766 "uuid": "5276df87-c7e6-4afa-a7a3-bb85d887d026", 00:11:29.766 "assigned_rate_limits": { 00:11:29.766 "rw_ios_per_sec": 0, 00:11:29.766 "rw_mbytes_per_sec": 0, 00:11:29.766 "r_mbytes_per_sec": 0, 00:11:29.766 "w_mbytes_per_sec": 0 00:11:29.766 }, 00:11:29.766 "claimed": true, 00:11:29.767 "claim_type": "exclusive_write", 00:11:29.767 "zoned": false, 00:11:29.767 "supported_io_types": { 00:11:29.767 "read": true, 00:11:29.767 "write": true, 00:11:29.767 "unmap": 
true, 00:11:29.767 "flush": true, 00:11:29.767 "reset": true, 00:11:29.767 "nvme_admin": false, 00:11:29.767 "nvme_io": false, 00:11:29.767 "nvme_io_md": false, 00:11:29.767 "write_zeroes": true, 00:11:29.767 "zcopy": true, 00:11:29.767 "get_zone_info": false, 00:11:29.767 "zone_management": false, 00:11:29.767 "zone_append": false, 00:11:29.767 "compare": false, 00:11:29.767 "compare_and_write": false, 00:11:29.767 "abort": true, 00:11:29.767 "seek_hole": false, 00:11:29.767 "seek_data": false, 00:11:29.767 "copy": true, 00:11:29.767 "nvme_iov_md": false 00:11:29.767 }, 00:11:29.767 "memory_domains": [ 00:11:29.767 { 00:11:29.767 "dma_device_id": "system", 00:11:29.767 "dma_device_type": 1 00:11:29.767 }, 00:11:29.767 { 00:11:29.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.767 "dma_device_type": 2 00:11:29.767 } 00:11:29.767 ], 00:11:29.767 "driver_specific": {} 00:11:29.767 } 00:11:29.767 ] 00:11:29.767 15:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.767 15:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:29.767 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:29.767 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.767 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.767 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:29.767 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.767 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:29.767 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.767 15:38:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.767 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.767 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.767 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.767 15:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.767 15:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.767 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.767 15:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.029 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.029 "name": "Existed_Raid", 00:11:30.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.029 "strip_size_kb": 64, 00:11:30.029 "state": "configuring", 00:11:30.029 "raid_level": "raid0", 00:11:30.029 "superblock": false, 00:11:30.029 "num_base_bdevs": 3, 00:11:30.029 "num_base_bdevs_discovered": 2, 00:11:30.029 "num_base_bdevs_operational": 3, 00:11:30.029 "base_bdevs_list": [ 00:11:30.029 { 00:11:30.029 "name": "BaseBdev1", 00:11:30.029 "uuid": "5276df87-c7e6-4afa-a7a3-bb85d887d026", 00:11:30.029 "is_configured": true, 00:11:30.029 "data_offset": 0, 00:11:30.029 "data_size": 65536 00:11:30.029 }, 00:11:30.029 { 00:11:30.029 "name": null, 00:11:30.029 "uuid": "1f420d93-8ca5-41cb-8828-ba3ac05961c3", 00:11:30.029 "is_configured": false, 00:11:30.029 "data_offset": 0, 00:11:30.029 "data_size": 65536 00:11:30.029 }, 00:11:30.029 { 00:11:30.029 "name": "BaseBdev3", 00:11:30.029 "uuid": "ddd46f19-426c-4958-a6d7-f68298f31d61", 00:11:30.029 "is_configured": true, 00:11:30.029 "data_offset": 0, 
00:11:30.029 "data_size": 65536 00:11:30.029 } 00:11:30.029 ] 00:11:30.029 }' 00:11:30.029 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.029 15:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.288 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:30.288 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.288 15:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.288 15:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.288 15:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.288 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:30.288 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:30.288 15:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.288 15:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.288 [2024-12-06 15:38:13.491389] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:30.288 15:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.288 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:30.288 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.288 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.288 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:11:30.288 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.288 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:30.288 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.288 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.288 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.288 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.288 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.288 15:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.288 15:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.288 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.288 15:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.288 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.288 "name": "Existed_Raid", 00:11:30.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.288 "strip_size_kb": 64, 00:11:30.288 "state": "configuring", 00:11:30.288 "raid_level": "raid0", 00:11:30.288 "superblock": false, 00:11:30.288 "num_base_bdevs": 3, 00:11:30.288 "num_base_bdevs_discovered": 1, 00:11:30.288 "num_base_bdevs_operational": 3, 00:11:30.288 "base_bdevs_list": [ 00:11:30.288 { 00:11:30.288 "name": "BaseBdev1", 00:11:30.288 "uuid": "5276df87-c7e6-4afa-a7a3-bb85d887d026", 00:11:30.288 "is_configured": true, 00:11:30.288 "data_offset": 0, 00:11:30.288 "data_size": 65536 00:11:30.288 }, 00:11:30.288 { 
00:11:30.288 "name": null, 00:11:30.288 "uuid": "1f420d93-8ca5-41cb-8828-ba3ac05961c3", 00:11:30.288 "is_configured": false, 00:11:30.288 "data_offset": 0, 00:11:30.288 "data_size": 65536 00:11:30.288 }, 00:11:30.288 { 00:11:30.288 "name": null, 00:11:30.288 "uuid": "ddd46f19-426c-4958-a6d7-f68298f31d61", 00:11:30.288 "is_configured": false, 00:11:30.288 "data_offset": 0, 00:11:30.288 "data_size": 65536 00:11:30.288 } 00:11:30.288 ] 00:11:30.288 }' 00:11:30.288 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.288 15:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.856 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.856 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:30.856 15:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.856 15:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.856 15:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.856 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:30.856 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:30.856 15:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.856 15:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.856 [2024-12-06 15:38:13.970748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:30.856 15:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.856 15:38:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:30.856 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.856 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.856 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:30.856 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.856 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:30.856 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.856 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.856 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.856 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.856 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.856 15:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.856 15:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.856 15:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.856 15:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.856 15:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.856 "name": "Existed_Raid", 00:11:30.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.856 "strip_size_kb": 64, 00:11:30.856 "state": "configuring", 00:11:30.856 "raid_level": "raid0", 00:11:30.856 
"superblock": false, 00:11:30.856 "num_base_bdevs": 3, 00:11:30.856 "num_base_bdevs_discovered": 2, 00:11:30.856 "num_base_bdevs_operational": 3, 00:11:30.856 "base_bdevs_list": [ 00:11:30.856 { 00:11:30.856 "name": "BaseBdev1", 00:11:30.856 "uuid": "5276df87-c7e6-4afa-a7a3-bb85d887d026", 00:11:30.856 "is_configured": true, 00:11:30.856 "data_offset": 0, 00:11:30.856 "data_size": 65536 00:11:30.856 }, 00:11:30.856 { 00:11:30.856 "name": null, 00:11:30.856 "uuid": "1f420d93-8ca5-41cb-8828-ba3ac05961c3", 00:11:30.856 "is_configured": false, 00:11:30.856 "data_offset": 0, 00:11:30.856 "data_size": 65536 00:11:30.856 }, 00:11:30.856 { 00:11:30.856 "name": "BaseBdev3", 00:11:30.856 "uuid": "ddd46f19-426c-4958-a6d7-f68298f31d61", 00:11:30.856 "is_configured": true, 00:11:30.856 "data_offset": 0, 00:11:30.856 "data_size": 65536 00:11:30.856 } 00:11:30.856 ] 00:11:30.856 }' 00:11:30.856 15:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.856 15:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.424 15:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.424 15:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.424 15:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.424 15:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:31.424 15:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.425 15:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:31.425 15:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:31.425 15:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:31.425 15:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.425 [2024-12-06 15:38:14.482313] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:31.425 15:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.425 15:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:31.425 15:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.425 15:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.425 15:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:31.425 15:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.425 15:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:31.425 15:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.425 15:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.425 15:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.425 15:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.425 15:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.425 15:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.425 15:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.425 15:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.425 15:38:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.425 15:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.425 "name": "Existed_Raid", 00:11:31.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.425 "strip_size_kb": 64, 00:11:31.425 "state": "configuring", 00:11:31.425 "raid_level": "raid0", 00:11:31.425 "superblock": false, 00:11:31.425 "num_base_bdevs": 3, 00:11:31.425 "num_base_bdevs_discovered": 1, 00:11:31.425 "num_base_bdevs_operational": 3, 00:11:31.425 "base_bdevs_list": [ 00:11:31.425 { 00:11:31.425 "name": null, 00:11:31.425 "uuid": "5276df87-c7e6-4afa-a7a3-bb85d887d026", 00:11:31.425 "is_configured": false, 00:11:31.425 "data_offset": 0, 00:11:31.425 "data_size": 65536 00:11:31.425 }, 00:11:31.425 { 00:11:31.425 "name": null, 00:11:31.425 "uuid": "1f420d93-8ca5-41cb-8828-ba3ac05961c3", 00:11:31.425 "is_configured": false, 00:11:31.425 "data_offset": 0, 00:11:31.425 "data_size": 65536 00:11:31.425 }, 00:11:31.425 { 00:11:31.425 "name": "BaseBdev3", 00:11:31.425 "uuid": "ddd46f19-426c-4958-a6d7-f68298f31d61", 00:11:31.425 "is_configured": true, 00:11:31.425 "data_offset": 0, 00:11:31.425 "data_size": 65536 00:11:31.425 } 00:11:31.425 ] 00:11:31.425 }' 00:11:31.425 15:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.425 15:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.999 15:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.999 15:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:31.999 15:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.999 15:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.999 15:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:11:31.999 15:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:31.999 15:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:31.999 15:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.999 15:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.999 [2024-12-06 15:38:15.048362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:31.999 15:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.999 15:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:31.999 15:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.999 15:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.999 15:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:31.999 15:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.999 15:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:31.999 15:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.999 15:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.999 15:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.999 15:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.999 15:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:11:31.999 15:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.999 15:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.999 15:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.999 15:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.999 15:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.999 "name": "Existed_Raid", 00:11:31.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.999 "strip_size_kb": 64, 00:11:31.999 "state": "configuring", 00:11:31.999 "raid_level": "raid0", 00:11:31.999 "superblock": false, 00:11:31.999 "num_base_bdevs": 3, 00:11:31.999 "num_base_bdevs_discovered": 2, 00:11:31.999 "num_base_bdevs_operational": 3, 00:11:31.999 "base_bdevs_list": [ 00:11:31.999 { 00:11:31.999 "name": null, 00:11:31.999 "uuid": "5276df87-c7e6-4afa-a7a3-bb85d887d026", 00:11:31.999 "is_configured": false, 00:11:31.999 "data_offset": 0, 00:11:31.999 "data_size": 65536 00:11:31.999 }, 00:11:31.999 { 00:11:31.999 "name": "BaseBdev2", 00:11:31.999 "uuid": "1f420d93-8ca5-41cb-8828-ba3ac05961c3", 00:11:32.000 "is_configured": true, 00:11:32.000 "data_offset": 0, 00:11:32.000 "data_size": 65536 00:11:32.000 }, 00:11:32.000 { 00:11:32.000 "name": "BaseBdev3", 00:11:32.000 "uuid": "ddd46f19-426c-4958-a6d7-f68298f31d61", 00:11:32.000 "is_configured": true, 00:11:32.000 "data_offset": 0, 00:11:32.000 "data_size": 65536 00:11:32.000 } 00:11:32.000 ] 00:11:32.000 }' 00:11:32.000 15:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.000 15:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.259 15:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:32.259 
15:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.259 15:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.259 15:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.259 15:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.259 15:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:32.259 15:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.259 15:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:32.259 15:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.259 15:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.259 15:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.517 15:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5276df87-c7e6-4afa-a7a3-bb85d887d026 00:11:32.517 15:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.517 15:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.517 [2024-12-06 15:38:15.616479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:32.517 [2024-12-06 15:38:15.616768] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:32.517 [2024-12-06 15:38:15.616798] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:32.517 [2024-12-06 15:38:15.617132] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:11:32.517 [2024-12-06 15:38:15.617322] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:32.517 [2024-12-06 15:38:15.617333] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:32.517 [2024-12-06 15:38:15.617653] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:32.517 NewBaseBdev 00:11:32.517 15:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.517 15:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:32.517 15:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:32.518 15:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:32.518 15:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:32.518 15:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:32.518 15:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:32.518 15:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:32.518 15:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.518 15:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.518 15:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.518 15:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:32.518 15:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.518 15:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:11:32.518 [ 00:11:32.518 { 00:11:32.518 "name": "NewBaseBdev", 00:11:32.518 "aliases": [ 00:11:32.518 "5276df87-c7e6-4afa-a7a3-bb85d887d026" 00:11:32.518 ], 00:11:32.518 "product_name": "Malloc disk", 00:11:32.518 "block_size": 512, 00:11:32.518 "num_blocks": 65536, 00:11:32.518 "uuid": "5276df87-c7e6-4afa-a7a3-bb85d887d026", 00:11:32.518 "assigned_rate_limits": { 00:11:32.518 "rw_ios_per_sec": 0, 00:11:32.518 "rw_mbytes_per_sec": 0, 00:11:32.518 "r_mbytes_per_sec": 0, 00:11:32.518 "w_mbytes_per_sec": 0 00:11:32.518 }, 00:11:32.518 "claimed": true, 00:11:32.518 "claim_type": "exclusive_write", 00:11:32.518 "zoned": false, 00:11:32.518 "supported_io_types": { 00:11:32.518 "read": true, 00:11:32.518 "write": true, 00:11:32.518 "unmap": true, 00:11:32.518 "flush": true, 00:11:32.518 "reset": true, 00:11:32.518 "nvme_admin": false, 00:11:32.518 "nvme_io": false, 00:11:32.518 "nvme_io_md": false, 00:11:32.518 "write_zeroes": true, 00:11:32.518 "zcopy": true, 00:11:32.518 "get_zone_info": false, 00:11:32.518 "zone_management": false, 00:11:32.518 "zone_append": false, 00:11:32.518 "compare": false, 00:11:32.518 "compare_and_write": false, 00:11:32.518 "abort": true, 00:11:32.518 "seek_hole": false, 00:11:32.518 "seek_data": false, 00:11:32.518 "copy": true, 00:11:32.518 "nvme_iov_md": false 00:11:32.518 }, 00:11:32.518 "memory_domains": [ 00:11:32.518 { 00:11:32.518 "dma_device_id": "system", 00:11:32.518 "dma_device_type": 1 00:11:32.518 }, 00:11:32.518 { 00:11:32.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.518 "dma_device_type": 2 00:11:32.518 } 00:11:32.518 ], 00:11:32.518 "driver_specific": {} 00:11:32.518 } 00:11:32.518 ] 00:11:32.518 15:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.518 15:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:32.518 15:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:11:32.518 15:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.518 15:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:32.518 15:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:32.518 15:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:32.518 15:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:32.518 15:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.518 15:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.518 15:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.518 15:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.518 15:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.518 15:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.518 15:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.518 15:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.518 15:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.518 15:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.518 "name": "Existed_Raid", 00:11:32.518 "uuid": "e5c2608e-8023-407d-888b-91103f28d1cf", 00:11:32.518 "strip_size_kb": 64, 00:11:32.518 "state": "online", 00:11:32.518 "raid_level": "raid0", 00:11:32.518 "superblock": false, 00:11:32.518 "num_base_bdevs": 3, 00:11:32.518 
"num_base_bdevs_discovered": 3, 00:11:32.518 "num_base_bdevs_operational": 3, 00:11:32.518 "base_bdevs_list": [ 00:11:32.518 { 00:11:32.518 "name": "NewBaseBdev", 00:11:32.518 "uuid": "5276df87-c7e6-4afa-a7a3-bb85d887d026", 00:11:32.518 "is_configured": true, 00:11:32.518 "data_offset": 0, 00:11:32.518 "data_size": 65536 00:11:32.518 }, 00:11:32.518 { 00:11:32.518 "name": "BaseBdev2", 00:11:32.518 "uuid": "1f420d93-8ca5-41cb-8828-ba3ac05961c3", 00:11:32.518 "is_configured": true, 00:11:32.518 "data_offset": 0, 00:11:32.518 "data_size": 65536 00:11:32.518 }, 00:11:32.518 { 00:11:32.518 "name": "BaseBdev3", 00:11:32.518 "uuid": "ddd46f19-426c-4958-a6d7-f68298f31d61", 00:11:32.518 "is_configured": true, 00:11:32.518 "data_offset": 0, 00:11:32.518 "data_size": 65536 00:11:32.518 } 00:11:32.518 ] 00:11:32.518 }' 00:11:32.518 15:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.518 15:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.086 15:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:33.086 15:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:33.086 15:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:33.086 15:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:33.086 15:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:33.086 15:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:33.086 15:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:33.086 15:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.086 15:38:16 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:33.086 15:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.086 [2024-12-06 15:38:16.124159] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:33.086 15:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.086 15:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:33.086 "name": "Existed_Raid", 00:11:33.086 "aliases": [ 00:11:33.086 "e5c2608e-8023-407d-888b-91103f28d1cf" 00:11:33.086 ], 00:11:33.086 "product_name": "Raid Volume", 00:11:33.086 "block_size": 512, 00:11:33.086 "num_blocks": 196608, 00:11:33.086 "uuid": "e5c2608e-8023-407d-888b-91103f28d1cf", 00:11:33.086 "assigned_rate_limits": { 00:11:33.086 "rw_ios_per_sec": 0, 00:11:33.086 "rw_mbytes_per_sec": 0, 00:11:33.086 "r_mbytes_per_sec": 0, 00:11:33.086 "w_mbytes_per_sec": 0 00:11:33.087 }, 00:11:33.087 "claimed": false, 00:11:33.087 "zoned": false, 00:11:33.087 "supported_io_types": { 00:11:33.087 "read": true, 00:11:33.087 "write": true, 00:11:33.087 "unmap": true, 00:11:33.087 "flush": true, 00:11:33.087 "reset": true, 00:11:33.087 "nvme_admin": false, 00:11:33.087 "nvme_io": false, 00:11:33.087 "nvme_io_md": false, 00:11:33.087 "write_zeroes": true, 00:11:33.087 "zcopy": false, 00:11:33.087 "get_zone_info": false, 00:11:33.087 "zone_management": false, 00:11:33.087 "zone_append": false, 00:11:33.087 "compare": false, 00:11:33.087 "compare_and_write": false, 00:11:33.087 "abort": false, 00:11:33.087 "seek_hole": false, 00:11:33.087 "seek_data": false, 00:11:33.087 "copy": false, 00:11:33.087 "nvme_iov_md": false 00:11:33.087 }, 00:11:33.087 "memory_domains": [ 00:11:33.087 { 00:11:33.087 "dma_device_id": "system", 00:11:33.087 "dma_device_type": 1 00:11:33.087 }, 00:11:33.087 { 00:11:33.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.087 "dma_device_type": 2 00:11:33.087 }, 00:11:33.087 
{ 00:11:33.087 "dma_device_id": "system", 00:11:33.087 "dma_device_type": 1 00:11:33.087 }, 00:11:33.087 { 00:11:33.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.087 "dma_device_type": 2 00:11:33.087 }, 00:11:33.087 { 00:11:33.087 "dma_device_id": "system", 00:11:33.087 "dma_device_type": 1 00:11:33.087 }, 00:11:33.087 { 00:11:33.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.087 "dma_device_type": 2 00:11:33.087 } 00:11:33.087 ], 00:11:33.087 "driver_specific": { 00:11:33.087 "raid": { 00:11:33.087 "uuid": "e5c2608e-8023-407d-888b-91103f28d1cf", 00:11:33.087 "strip_size_kb": 64, 00:11:33.087 "state": "online", 00:11:33.087 "raid_level": "raid0", 00:11:33.087 "superblock": false, 00:11:33.087 "num_base_bdevs": 3, 00:11:33.087 "num_base_bdevs_discovered": 3, 00:11:33.087 "num_base_bdevs_operational": 3, 00:11:33.087 "base_bdevs_list": [ 00:11:33.087 { 00:11:33.087 "name": "NewBaseBdev", 00:11:33.087 "uuid": "5276df87-c7e6-4afa-a7a3-bb85d887d026", 00:11:33.087 "is_configured": true, 00:11:33.087 "data_offset": 0, 00:11:33.087 "data_size": 65536 00:11:33.087 }, 00:11:33.087 { 00:11:33.087 "name": "BaseBdev2", 00:11:33.087 "uuid": "1f420d93-8ca5-41cb-8828-ba3ac05961c3", 00:11:33.087 "is_configured": true, 00:11:33.087 "data_offset": 0, 00:11:33.087 "data_size": 65536 00:11:33.087 }, 00:11:33.087 { 00:11:33.087 "name": "BaseBdev3", 00:11:33.087 "uuid": "ddd46f19-426c-4958-a6d7-f68298f31d61", 00:11:33.087 "is_configured": true, 00:11:33.087 "data_offset": 0, 00:11:33.087 "data_size": 65536 00:11:33.087 } 00:11:33.087 ] 00:11:33.087 } 00:11:33.087 } 00:11:33.087 }' 00:11:33.087 15:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:33.087 15:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:33.087 BaseBdev2 00:11:33.087 BaseBdev3' 00:11:33.087 15:38:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.087 15:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:33.087 15:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.087 15:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:33.087 15:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.087 15:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.087 15:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.087 15:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.087 15:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.087 15:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.087 15:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.087 15:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:33.087 15:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.087 15:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.087 15:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.087 15:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.087 15:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.087 
15:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.087 15:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.087 15:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:33.087 15:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.087 15:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.087 15:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.087 15:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.346 15:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.346 15:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.346 15:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:33.346 15:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.346 15:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.346 [2024-12-06 15:38:16.391573] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:33.346 [2024-12-06 15:38:16.391613] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:33.346 [2024-12-06 15:38:16.391749] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:33.346 [2024-12-06 15:38:16.391821] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:33.346 [2024-12-06 15:38:16.391839] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:33.346 15:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.347 15:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63845 00:11:33.347 15:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63845 ']' 00:11:33.347 15:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63845 00:11:33.347 15:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:33.347 15:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:33.347 15:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63845 00:11:33.347 killing process with pid 63845 00:11:33.347 15:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:33.347 15:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:33.347 15:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63845' 00:11:33.347 15:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63845 00:11:33.347 [2024-12-06 15:38:16.449747] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:33.347 15:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63845 00:11:33.629 [2024-12-06 15:38:16.785230] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:35.007 15:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:35.007 ************************************ 00:11:35.007 END TEST raid_state_function_test 00:11:35.007 ************************************ 00:11:35.007 00:11:35.007 real 0m10.981s 00:11:35.007 user 0m17.096s 
00:11:35.007 sys 0m2.304s 00:11:35.007 15:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.007 15:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.007 15:38:18 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:11:35.007 15:38:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:35.007 15:38:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.007 15:38:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:35.007 ************************************ 00:11:35.007 START TEST raid_state_function_test_sb 00:11:35.007 ************************************ 00:11:35.007 15:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:11:35.007 15:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:35.007 15:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:35.007 15:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:35.007 15:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:35.007 15:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:35.007 15:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:35.007 15:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:35.007 15:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:35.007 15:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:35.007 15:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo 
BaseBdev2 00:11:35.007 15:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:35.007 15:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:35.007 15:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:35.007 15:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:35.007 15:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:35.007 15:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:35.007 15:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:35.007 15:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:35.007 15:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:35.007 15:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:35.007 15:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:35.007 15:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:35.007 15:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:35.007 15:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:35.007 15:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:35.007 15:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:35.007 15:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64473 00:11:35.007 Process raid pid: 64473 00:11:35.007 15:38:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64473' 00:11:35.007 15:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64473 00:11:35.007 15:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64473 ']' 00:11:35.007 15:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:35.007 15:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.007 15:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:35.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.007 15:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.007 15:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:35.007 15:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.007 [2024-12-06 15:38:18.250186] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:11:35.007 [2024-12-06 15:38:18.250595] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.266 [2024-12-06 15:38:18.440609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.524 [2024-12-06 15:38:18.589516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.783 [2024-12-06 15:38:18.824206] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.783 [2024-12-06 15:38:18.824594] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:36.043 15:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:36.043 15:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:36.043 15:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:36.043 15:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.043 15:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.043 [2024-12-06 15:38:19.128363] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:36.043 [2024-12-06 15:38:19.128460] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:36.043 [2024-12-06 15:38:19.128481] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:36.043 [2024-12-06 15:38:19.128496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:36.043 [2024-12-06 15:38:19.128514] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:11:36.043 [2024-12-06 15:38:19.128528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:36.043 15:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.043 15:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:36.043 15:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.043 15:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.043 15:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:36.043 15:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.043 15:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:36.043 15:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.043 15:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.043 15:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.043 15:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.043 15:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.043 15:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.043 15:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.043 15:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.043 15:38:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.043 15:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.043 "name": "Existed_Raid", 00:11:36.043 "uuid": "9e7c54bd-43d1-43e9-b298-ee3d2351c858", 00:11:36.043 "strip_size_kb": 64, 00:11:36.043 "state": "configuring", 00:11:36.043 "raid_level": "raid0", 00:11:36.043 "superblock": true, 00:11:36.043 "num_base_bdevs": 3, 00:11:36.043 "num_base_bdevs_discovered": 0, 00:11:36.043 "num_base_bdevs_operational": 3, 00:11:36.043 "base_bdevs_list": [ 00:11:36.043 { 00:11:36.043 "name": "BaseBdev1", 00:11:36.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.043 "is_configured": false, 00:11:36.043 "data_offset": 0, 00:11:36.043 "data_size": 0 00:11:36.043 }, 00:11:36.043 { 00:11:36.043 "name": "BaseBdev2", 00:11:36.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.043 "is_configured": false, 00:11:36.043 "data_offset": 0, 00:11:36.043 "data_size": 0 00:11:36.043 }, 00:11:36.043 { 00:11:36.043 "name": "BaseBdev3", 00:11:36.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.043 "is_configured": false, 00:11:36.043 "data_offset": 0, 00:11:36.043 "data_size": 0 00:11:36.043 } 00:11:36.043 ] 00:11:36.043 }' 00:11:36.043 15:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.043 15:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.322 15:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:36.322 15:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.322 15:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.322 [2024-12-06 15:38:19.539748] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:36.322 [2024-12-06 15:38:19.539808] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:36.322 15:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.322 15:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:36.322 15:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.322 15:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.322 [2024-12-06 15:38:19.547774] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:36.322 [2024-12-06 15:38:19.547846] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:36.322 [2024-12-06 15:38:19.547858] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:36.322 [2024-12-06 15:38:19.547872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:36.322 [2024-12-06 15:38:19.547880] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:36.322 [2024-12-06 15:38:19.547894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:36.322 15:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.322 15:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:36.322 15:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.322 15:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.322 [2024-12-06 15:38:19.600151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:36.322 BaseBdev1 
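The log above shows the test registering base bdevs one at a time: with `-s` (superblock) and three named base bdevs, the raid bdev stays in the "configuring" state until every operational base bdev has been discovered and claimed. A minimal Python sketch of that state rule, for reading the dumps that follow (`expected_raid_state` is a hypothetical helper, not part of SPDK):

```python
def expected_raid_state(discovered: int, operational: int) -> str:
    """Mirror the transition the test verifies: a raid bdev remains
    'configuring' until all operational base bdevs are discovered,
    then it goes 'online'."""
    if discovered < operational:
        return "configuring"
    return "online"

# Progression seen in this log for a 3-disk raid0 (Existed_Raid):
assert expected_raid_state(0, 3) == "configuring"  # no base bdevs yet
assert expected_raid_state(1, 3) == "configuring"  # BaseBdev1 claimed
assert expected_raid_state(2, 3) == "configuring"  # BaseBdev2 claimed
assert expected_raid_state(3, 3) == "online"       # BaseBdev3 claimed
```

This matches the `num_base_bdevs_discovered` / `state` pairs printed in the `bdev_raid_get_bdevs` JSON dumps below.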
00:11:36.322 15:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.322 15:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:36.322 15:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:36.322 15:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:36.322 15:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:36.322 15:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:36.322 15:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:36.322 15:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:36.322 15:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.322 15:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.322 15:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.322 15:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:36.323 15:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.323 15:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.582 [ 00:11:36.582 { 00:11:36.582 "name": "BaseBdev1", 00:11:36.582 "aliases": [ 00:11:36.582 "03017bc6-c9ef-4a97-bf50-4b98ca74e8f7" 00:11:36.582 ], 00:11:36.582 "product_name": "Malloc disk", 00:11:36.582 "block_size": 512, 00:11:36.582 "num_blocks": 65536, 00:11:36.582 "uuid": "03017bc6-c9ef-4a97-bf50-4b98ca74e8f7", 00:11:36.582 "assigned_rate_limits": { 00:11:36.582 
"rw_ios_per_sec": 0, 00:11:36.582 "rw_mbytes_per_sec": 0, 00:11:36.582 "r_mbytes_per_sec": 0, 00:11:36.582 "w_mbytes_per_sec": 0 00:11:36.582 }, 00:11:36.582 "claimed": true, 00:11:36.582 "claim_type": "exclusive_write", 00:11:36.582 "zoned": false, 00:11:36.582 "supported_io_types": { 00:11:36.582 "read": true, 00:11:36.582 "write": true, 00:11:36.582 "unmap": true, 00:11:36.582 "flush": true, 00:11:36.582 "reset": true, 00:11:36.582 "nvme_admin": false, 00:11:36.582 "nvme_io": false, 00:11:36.582 "nvme_io_md": false, 00:11:36.582 "write_zeroes": true, 00:11:36.582 "zcopy": true, 00:11:36.582 "get_zone_info": false, 00:11:36.582 "zone_management": false, 00:11:36.582 "zone_append": false, 00:11:36.582 "compare": false, 00:11:36.582 "compare_and_write": false, 00:11:36.582 "abort": true, 00:11:36.582 "seek_hole": false, 00:11:36.582 "seek_data": false, 00:11:36.582 "copy": true, 00:11:36.582 "nvme_iov_md": false 00:11:36.582 }, 00:11:36.582 "memory_domains": [ 00:11:36.582 { 00:11:36.582 "dma_device_id": "system", 00:11:36.582 "dma_device_type": 1 00:11:36.582 }, 00:11:36.582 { 00:11:36.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.582 "dma_device_type": 2 00:11:36.582 } 00:11:36.582 ], 00:11:36.582 "driver_specific": {} 00:11:36.582 } 00:11:36.582 ] 00:11:36.582 15:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.582 15:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:36.582 15:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:36.582 15:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.582 15:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.582 15:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:11:36.582 15:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.582 15:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:36.582 15:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.582 15:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.582 15:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.582 15:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.582 15:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.582 15:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.582 15:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.582 15:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.582 15:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.582 15:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.582 "name": "Existed_Raid", 00:11:36.582 "uuid": "e0ff06e4-7d33-4173-b134-b9446c141b85", 00:11:36.582 "strip_size_kb": 64, 00:11:36.582 "state": "configuring", 00:11:36.582 "raid_level": "raid0", 00:11:36.582 "superblock": true, 00:11:36.582 "num_base_bdevs": 3, 00:11:36.582 "num_base_bdevs_discovered": 1, 00:11:36.582 "num_base_bdevs_operational": 3, 00:11:36.582 "base_bdevs_list": [ 00:11:36.582 { 00:11:36.582 "name": "BaseBdev1", 00:11:36.582 "uuid": "03017bc6-c9ef-4a97-bf50-4b98ca74e8f7", 00:11:36.582 "is_configured": true, 00:11:36.582 "data_offset": 2048, 00:11:36.582 "data_size": 63488 
00:11:36.582 }, 00:11:36.582 { 00:11:36.582 "name": "BaseBdev2", 00:11:36.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.582 "is_configured": false, 00:11:36.582 "data_offset": 0, 00:11:36.582 "data_size": 0 00:11:36.582 }, 00:11:36.582 { 00:11:36.582 "name": "BaseBdev3", 00:11:36.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.582 "is_configured": false, 00:11:36.582 "data_offset": 0, 00:11:36.582 "data_size": 0 00:11:36.582 } 00:11:36.582 ] 00:11:36.582 }' 00:11:36.582 15:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.582 15:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.841 15:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:36.841 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.841 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.841 [2024-12-06 15:38:20.055682] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:36.841 [2024-12-06 15:38:20.055769] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:36.841 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.841 15:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:36.841 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.841 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.841 [2024-12-06 15:38:20.067794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:36.841 [2024-12-06 
15:38:20.070673] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:36.841 [2024-12-06 15:38:20.070749] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:36.841 [2024-12-06 15:38:20.070763] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:36.841 [2024-12-06 15:38:20.070777] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:36.841 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.841 15:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:36.841 15:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:36.841 15:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:36.841 15:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.841 15:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.841 15:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:36.841 15:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.841 15:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:36.841 15:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.841 15:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.842 15:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.842 15:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:11:36.842 15:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.842 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.842 15:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.842 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.842 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.842 15:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.842 "name": "Existed_Raid", 00:11:36.842 "uuid": "b6ea2c55-54ed-44b5-a015-f0cc0149e3f4", 00:11:36.842 "strip_size_kb": 64, 00:11:36.842 "state": "configuring", 00:11:36.842 "raid_level": "raid0", 00:11:36.842 "superblock": true, 00:11:36.842 "num_base_bdevs": 3, 00:11:36.842 "num_base_bdevs_discovered": 1, 00:11:36.842 "num_base_bdevs_operational": 3, 00:11:36.842 "base_bdevs_list": [ 00:11:36.842 { 00:11:36.842 "name": "BaseBdev1", 00:11:36.842 "uuid": "03017bc6-c9ef-4a97-bf50-4b98ca74e8f7", 00:11:36.842 "is_configured": true, 00:11:36.842 "data_offset": 2048, 00:11:36.842 "data_size": 63488 00:11:36.842 }, 00:11:36.842 { 00:11:36.842 "name": "BaseBdev2", 00:11:36.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.842 "is_configured": false, 00:11:36.842 "data_offset": 0, 00:11:36.842 "data_size": 0 00:11:36.842 }, 00:11:36.842 { 00:11:36.842 "name": "BaseBdev3", 00:11:36.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.842 "is_configured": false, 00:11:36.842 "data_offset": 0, 00:11:36.842 "data_size": 0 00:11:36.842 } 00:11:36.842 ] 00:11:36.842 }' 00:11:36.842 15:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.842 15:38:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:37.411 15:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:37.411 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.411 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.411 [2024-12-06 15:38:20.521244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:37.411 BaseBdev2 00:11:37.411 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.411 15:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:37.411 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:37.412 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:37.412 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:37.412 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:37.412 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:37.412 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:37.412 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.412 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.412 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.412 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:37.412 15:38:20 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.412 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.412 [ 00:11:37.412 { 00:11:37.412 "name": "BaseBdev2", 00:11:37.412 "aliases": [ 00:11:37.412 "ea8ef34b-b2d1-4450-91d0-ceec6204f915" 00:11:37.412 ], 00:11:37.412 "product_name": "Malloc disk", 00:11:37.412 "block_size": 512, 00:11:37.412 "num_blocks": 65536, 00:11:37.412 "uuid": "ea8ef34b-b2d1-4450-91d0-ceec6204f915", 00:11:37.412 "assigned_rate_limits": { 00:11:37.412 "rw_ios_per_sec": 0, 00:11:37.412 "rw_mbytes_per_sec": 0, 00:11:37.412 "r_mbytes_per_sec": 0, 00:11:37.412 "w_mbytes_per_sec": 0 00:11:37.412 }, 00:11:37.412 "claimed": true, 00:11:37.412 "claim_type": "exclusive_write", 00:11:37.412 "zoned": false, 00:11:37.412 "supported_io_types": { 00:11:37.412 "read": true, 00:11:37.412 "write": true, 00:11:37.412 "unmap": true, 00:11:37.412 "flush": true, 00:11:37.412 "reset": true, 00:11:37.412 "nvme_admin": false, 00:11:37.412 "nvme_io": false, 00:11:37.412 "nvme_io_md": false, 00:11:37.412 "write_zeroes": true, 00:11:37.412 "zcopy": true, 00:11:37.412 "get_zone_info": false, 00:11:37.412 "zone_management": false, 00:11:37.412 "zone_append": false, 00:11:37.412 "compare": false, 00:11:37.412 "compare_and_write": false, 00:11:37.412 "abort": true, 00:11:37.412 "seek_hole": false, 00:11:37.412 "seek_data": false, 00:11:37.412 "copy": true, 00:11:37.412 "nvme_iov_md": false 00:11:37.412 }, 00:11:37.412 "memory_domains": [ 00:11:37.412 { 00:11:37.412 "dma_device_id": "system", 00:11:37.412 "dma_device_type": 1 00:11:37.412 }, 00:11:37.412 { 00:11:37.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.412 "dma_device_type": 2 00:11:37.412 } 00:11:37.412 ], 00:11:37.412 "driver_specific": {} 00:11:37.412 } 00:11:37.412 ] 00:11:37.412 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.412 15:38:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:11:37.412 15:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:37.412 15:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:37.412 15:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:37.412 15:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.412 15:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.412 15:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:37.412 15:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:37.412 15:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:37.412 15:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.412 15:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.412 15:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.412 15:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.412 15:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.412 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.412 15:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.412 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.412 15:38:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.412 15:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.412 "name": "Existed_Raid", 00:11:37.412 "uuid": "b6ea2c55-54ed-44b5-a015-f0cc0149e3f4", 00:11:37.412 "strip_size_kb": 64, 00:11:37.412 "state": "configuring", 00:11:37.412 "raid_level": "raid0", 00:11:37.412 "superblock": true, 00:11:37.412 "num_base_bdevs": 3, 00:11:37.412 "num_base_bdevs_discovered": 2, 00:11:37.412 "num_base_bdevs_operational": 3, 00:11:37.412 "base_bdevs_list": [ 00:11:37.412 { 00:11:37.412 "name": "BaseBdev1", 00:11:37.412 "uuid": "03017bc6-c9ef-4a97-bf50-4b98ca74e8f7", 00:11:37.412 "is_configured": true, 00:11:37.412 "data_offset": 2048, 00:11:37.412 "data_size": 63488 00:11:37.412 }, 00:11:37.412 { 00:11:37.412 "name": "BaseBdev2", 00:11:37.412 "uuid": "ea8ef34b-b2d1-4450-91d0-ceec6204f915", 00:11:37.412 "is_configured": true, 00:11:37.412 "data_offset": 2048, 00:11:37.412 "data_size": 63488 00:11:37.412 }, 00:11:37.412 { 00:11:37.412 "name": "BaseBdev3", 00:11:37.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.412 "is_configured": false, 00:11:37.412 "data_offset": 0, 00:11:37.412 "data_size": 0 00:11:37.412 } 00:11:37.412 ] 00:11:37.412 }' 00:11:37.412 15:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.412 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.671 15:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:37.671 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.671 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.930 [2024-12-06 15:38:20.976626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:37.930 [2024-12-06 15:38:20.976976] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:37.930 [2024-12-06 15:38:20.977002] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:37.931 [2024-12-06 15:38:20.977327] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:37.931 [2024-12-06 15:38:20.977516] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:37.931 [2024-12-06 15:38:20.977529] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:37.931 [2024-12-06 15:38:20.977696] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:37.931 BaseBdev3 00:11:37.931 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.931 15:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:37.931 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:37.931 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:37.931 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:37.931 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:37.931 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:37.931 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:37.931 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.931 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.931 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:11:37.931 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:37.931 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.931 15:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.931 [ 00:11:37.931 { 00:11:37.931 "name": "BaseBdev3", 00:11:37.931 "aliases": [ 00:11:37.931 "0772b4ab-2f0d-48e9-971d-7e16a781e15d" 00:11:37.931 ], 00:11:37.931 "product_name": "Malloc disk", 00:11:37.931 "block_size": 512, 00:11:37.931 "num_blocks": 65536, 00:11:37.931 "uuid": "0772b4ab-2f0d-48e9-971d-7e16a781e15d", 00:11:37.931 "assigned_rate_limits": { 00:11:37.931 "rw_ios_per_sec": 0, 00:11:37.931 "rw_mbytes_per_sec": 0, 00:11:37.931 "r_mbytes_per_sec": 0, 00:11:37.931 "w_mbytes_per_sec": 0 00:11:37.931 }, 00:11:37.931 "claimed": true, 00:11:37.931 "claim_type": "exclusive_write", 00:11:37.931 "zoned": false, 00:11:37.931 "supported_io_types": { 00:11:37.931 "read": true, 00:11:37.931 "write": true, 00:11:37.931 "unmap": true, 00:11:37.931 "flush": true, 00:11:37.931 "reset": true, 00:11:37.931 "nvme_admin": false, 00:11:37.931 "nvme_io": false, 00:11:37.931 "nvme_io_md": false, 00:11:37.931 "write_zeroes": true, 00:11:37.931 "zcopy": true, 00:11:37.931 "get_zone_info": false, 00:11:37.931 "zone_management": false, 00:11:37.931 "zone_append": false, 00:11:37.931 "compare": false, 00:11:37.931 "compare_and_write": false, 00:11:37.931 "abort": true, 00:11:37.931 "seek_hole": false, 00:11:37.931 "seek_data": false, 00:11:37.931 "copy": true, 00:11:37.931 "nvme_iov_md": false 00:11:37.931 }, 00:11:37.931 "memory_domains": [ 00:11:37.931 { 00:11:37.931 "dma_device_id": "system", 00:11:37.931 "dma_device_type": 1 00:11:37.931 }, 00:11:37.931 { 00:11:37.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.931 "dma_device_type": 2 00:11:37.931 } 00:11:37.931 ], 00:11:37.931 "driver_specific": 
{} 00:11:37.931 } 00:11:37.931 ] 00:11:37.931 15:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.931 15:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:37.931 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:37.931 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:37.931 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:37.931 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.931 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:37.931 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:37.931 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:37.931 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:37.931 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.931 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.931 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.931 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.931 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.931 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.931 15:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:37.931 15:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.931 15:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.931 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.931 "name": "Existed_Raid", 00:11:37.931 "uuid": "b6ea2c55-54ed-44b5-a015-f0cc0149e3f4", 00:11:37.931 "strip_size_kb": 64, 00:11:37.931 "state": "online", 00:11:37.931 "raid_level": "raid0", 00:11:37.931 "superblock": true, 00:11:37.931 "num_base_bdevs": 3, 00:11:37.931 "num_base_bdevs_discovered": 3, 00:11:37.931 "num_base_bdevs_operational": 3, 00:11:37.931 "base_bdevs_list": [ 00:11:37.931 { 00:11:37.931 "name": "BaseBdev1", 00:11:37.931 "uuid": "03017bc6-c9ef-4a97-bf50-4b98ca74e8f7", 00:11:37.931 "is_configured": true, 00:11:37.931 "data_offset": 2048, 00:11:37.931 "data_size": 63488 00:11:37.931 }, 00:11:37.931 { 00:11:37.931 "name": "BaseBdev2", 00:11:37.931 "uuid": "ea8ef34b-b2d1-4450-91d0-ceec6204f915", 00:11:37.931 "is_configured": true, 00:11:37.931 "data_offset": 2048, 00:11:37.931 "data_size": 63488 00:11:37.931 }, 00:11:37.931 { 00:11:37.931 "name": "BaseBdev3", 00:11:37.931 "uuid": "0772b4ab-2f0d-48e9-971d-7e16a781e15d", 00:11:37.931 "is_configured": true, 00:11:37.931 "data_offset": 2048, 00:11:37.931 "data_size": 63488 00:11:37.931 } 00:11:37.931 ] 00:11:37.931 }' 00:11:37.931 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.931 15:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.498 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:38.498 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:38.498 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:11:38.498 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:38.498 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:38.498 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:38.498 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:38.498 15:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.498 15:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.498 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:38.498 [2024-12-06 15:38:21.492302] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:38.498 15:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.498 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:38.498 "name": "Existed_Raid", 00:11:38.498 "aliases": [ 00:11:38.498 "b6ea2c55-54ed-44b5-a015-f0cc0149e3f4" 00:11:38.498 ], 00:11:38.498 "product_name": "Raid Volume", 00:11:38.498 "block_size": 512, 00:11:38.498 "num_blocks": 190464, 00:11:38.498 "uuid": "b6ea2c55-54ed-44b5-a015-f0cc0149e3f4", 00:11:38.498 "assigned_rate_limits": { 00:11:38.498 "rw_ios_per_sec": 0, 00:11:38.498 "rw_mbytes_per_sec": 0, 00:11:38.498 "r_mbytes_per_sec": 0, 00:11:38.498 "w_mbytes_per_sec": 0 00:11:38.498 }, 00:11:38.498 "claimed": false, 00:11:38.498 "zoned": false, 00:11:38.498 "supported_io_types": { 00:11:38.498 "read": true, 00:11:38.498 "write": true, 00:11:38.498 "unmap": true, 00:11:38.498 "flush": true, 00:11:38.498 "reset": true, 00:11:38.498 "nvme_admin": false, 00:11:38.498 "nvme_io": false, 00:11:38.498 "nvme_io_md": false, 00:11:38.498 
"write_zeroes": true, 00:11:38.498 "zcopy": false, 00:11:38.498 "get_zone_info": false, 00:11:38.498 "zone_management": false, 00:11:38.498 "zone_append": false, 00:11:38.498 "compare": false, 00:11:38.498 "compare_and_write": false, 00:11:38.498 "abort": false, 00:11:38.498 "seek_hole": false, 00:11:38.498 "seek_data": false, 00:11:38.498 "copy": false, 00:11:38.498 "nvme_iov_md": false 00:11:38.498 }, 00:11:38.498 "memory_domains": [ 00:11:38.498 { 00:11:38.498 "dma_device_id": "system", 00:11:38.498 "dma_device_type": 1 00:11:38.498 }, 00:11:38.498 { 00:11:38.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.498 "dma_device_type": 2 00:11:38.498 }, 00:11:38.498 { 00:11:38.498 "dma_device_id": "system", 00:11:38.498 "dma_device_type": 1 00:11:38.498 }, 00:11:38.498 { 00:11:38.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.498 "dma_device_type": 2 00:11:38.498 }, 00:11:38.498 { 00:11:38.498 "dma_device_id": "system", 00:11:38.498 "dma_device_type": 1 00:11:38.498 }, 00:11:38.498 { 00:11:38.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.498 "dma_device_type": 2 00:11:38.498 } 00:11:38.498 ], 00:11:38.498 "driver_specific": { 00:11:38.498 "raid": { 00:11:38.498 "uuid": "b6ea2c55-54ed-44b5-a015-f0cc0149e3f4", 00:11:38.498 "strip_size_kb": 64, 00:11:38.498 "state": "online", 00:11:38.498 "raid_level": "raid0", 00:11:38.498 "superblock": true, 00:11:38.498 "num_base_bdevs": 3, 00:11:38.498 "num_base_bdevs_discovered": 3, 00:11:38.498 "num_base_bdevs_operational": 3, 00:11:38.498 "base_bdevs_list": [ 00:11:38.498 { 00:11:38.498 "name": "BaseBdev1", 00:11:38.498 "uuid": "03017bc6-c9ef-4a97-bf50-4b98ca74e8f7", 00:11:38.498 "is_configured": true, 00:11:38.498 "data_offset": 2048, 00:11:38.498 "data_size": 63488 00:11:38.498 }, 00:11:38.498 { 00:11:38.498 "name": "BaseBdev2", 00:11:38.499 "uuid": "ea8ef34b-b2d1-4450-91d0-ceec6204f915", 00:11:38.499 "is_configured": true, 00:11:38.499 "data_offset": 2048, 00:11:38.499 "data_size": 63488 00:11:38.499 }, 
00:11:38.499 { 00:11:38.499 "name": "BaseBdev3", 00:11:38.499 "uuid": "0772b4ab-2f0d-48e9-971d-7e16a781e15d", 00:11:38.499 "is_configured": true, 00:11:38.499 "data_offset": 2048, 00:11:38.499 "data_size": 63488 00:11:38.499 } 00:11:38.499 ] 00:11:38.499 } 00:11:38.499 } 00:11:38.499 }' 00:11:38.499 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:38.499 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:38.499 BaseBdev2 00:11:38.499 BaseBdev3' 00:11:38.499 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.499 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:38.499 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.499 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:38.499 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.499 15:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.499 15:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.499 15:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.499 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.499 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.499 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.499 
15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.499 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:38.499 15:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.499 15:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.499 15:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.499 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.499 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.499 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.499 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:38.499 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.499 15:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.499 15:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.499 15:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.499 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.499 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.499 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:38.499 15:38:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.499 15:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.499 [2024-12-06 15:38:21.743744] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:38.499 [2024-12-06 15:38:21.743793] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:38.499 [2024-12-06 15:38:21.743865] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:38.757 15:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.757 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:38.757 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:38.757 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:38.757 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:38.757 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:38.757 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:11:38.757 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.757 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:38.757 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:38.757 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.757 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:38.757 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:38.757 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.757 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.757 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.757 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.758 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.758 15:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.758 15:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.758 15:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.758 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.758 "name": "Existed_Raid", 00:11:38.758 "uuid": "b6ea2c55-54ed-44b5-a015-f0cc0149e3f4", 00:11:38.758 "strip_size_kb": 64, 00:11:38.758 "state": "offline", 00:11:38.758 "raid_level": "raid0", 00:11:38.758 "superblock": true, 00:11:38.758 "num_base_bdevs": 3, 00:11:38.758 "num_base_bdevs_discovered": 2, 00:11:38.758 "num_base_bdevs_operational": 2, 00:11:38.758 "base_bdevs_list": [ 00:11:38.758 { 00:11:38.758 "name": null, 00:11:38.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.758 "is_configured": false, 00:11:38.758 "data_offset": 0, 00:11:38.758 "data_size": 63488 00:11:38.758 }, 00:11:38.758 { 00:11:38.758 "name": "BaseBdev2", 00:11:38.758 "uuid": "ea8ef34b-b2d1-4450-91d0-ceec6204f915", 00:11:38.758 "is_configured": true, 00:11:38.758 "data_offset": 2048, 00:11:38.758 "data_size": 63488 00:11:38.758 }, 00:11:38.758 { 00:11:38.758 "name": "BaseBdev3", 00:11:38.758 "uuid": "0772b4ab-2f0d-48e9-971d-7e16a781e15d", 
00:11:38.758 "is_configured": true, 00:11:38.758 "data_offset": 2048, 00:11:38.758 "data_size": 63488 00:11:38.758 } 00:11:38.758 ] 00:11:38.758 }' 00:11:38.758 15:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.758 15:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.017 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:39.017 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:39.017 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:39.017 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.017 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.017 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.017 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.304 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:39.304 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:39.304 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:39.304 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.304 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.304 [2024-12-06 15:38:22.332589] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:39.304 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.304 15:38:22 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:39.304 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:39.304 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:39.304 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.304 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.304 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.304 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.304 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:39.304 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:39.304 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:39.304 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.304 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.304 [2024-12-06 15:38:22.494406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:39.304 [2024-12-06 15:38:22.494487] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:39.563 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.563 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:39.563 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:39.563 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:39.563 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:39.563 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.563 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.563 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.563 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:39.563 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:39.563 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:39.563 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:39.563 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:39.563 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:39.563 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.563 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.563 BaseBdev2 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:39.564 15:38:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.564 [ 00:11:39.564 { 00:11:39.564 "name": "BaseBdev2", 00:11:39.564 "aliases": [ 00:11:39.564 "66640b7b-59f9-47c9-b21a-2df74e98e786" 00:11:39.564 ], 00:11:39.564 "product_name": "Malloc disk", 00:11:39.564 "block_size": 512, 00:11:39.564 "num_blocks": 65536, 00:11:39.564 "uuid": "66640b7b-59f9-47c9-b21a-2df74e98e786", 00:11:39.564 "assigned_rate_limits": { 00:11:39.564 "rw_ios_per_sec": 0, 00:11:39.564 "rw_mbytes_per_sec": 0, 00:11:39.564 "r_mbytes_per_sec": 0, 00:11:39.564 "w_mbytes_per_sec": 0 00:11:39.564 }, 00:11:39.564 "claimed": false, 00:11:39.564 "zoned": false, 00:11:39.564 "supported_io_types": { 00:11:39.564 "read": true, 00:11:39.564 "write": true, 00:11:39.564 "unmap": true, 00:11:39.564 "flush": true, 00:11:39.564 "reset": true, 00:11:39.564 "nvme_admin": false, 00:11:39.564 "nvme_io": false, 00:11:39.564 "nvme_io_md": false, 00:11:39.564 "write_zeroes": true, 00:11:39.564 "zcopy": true, 00:11:39.564 "get_zone_info": false, 00:11:39.564 
"zone_management": false, 00:11:39.564 "zone_append": false, 00:11:39.564 "compare": false, 00:11:39.564 "compare_and_write": false, 00:11:39.564 "abort": true, 00:11:39.564 "seek_hole": false, 00:11:39.564 "seek_data": false, 00:11:39.564 "copy": true, 00:11:39.564 "nvme_iov_md": false 00:11:39.564 }, 00:11:39.564 "memory_domains": [ 00:11:39.564 { 00:11:39.564 "dma_device_id": "system", 00:11:39.564 "dma_device_type": 1 00:11:39.564 }, 00:11:39.564 { 00:11:39.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.564 "dma_device_type": 2 00:11:39.564 } 00:11:39.564 ], 00:11:39.564 "driver_specific": {} 00:11:39.564 } 00:11:39.564 ] 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.564 BaseBdev3 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.564 [ 00:11:39.564 { 00:11:39.564 "name": "BaseBdev3", 00:11:39.564 "aliases": [ 00:11:39.564 "c7bb7142-b0fe-4258-a779-866fe65709cc" 00:11:39.564 ], 00:11:39.564 "product_name": "Malloc disk", 00:11:39.564 "block_size": 512, 00:11:39.564 "num_blocks": 65536, 00:11:39.564 "uuid": "c7bb7142-b0fe-4258-a779-866fe65709cc", 00:11:39.564 "assigned_rate_limits": { 00:11:39.564 "rw_ios_per_sec": 0, 00:11:39.564 "rw_mbytes_per_sec": 0, 00:11:39.564 "r_mbytes_per_sec": 0, 00:11:39.564 "w_mbytes_per_sec": 0 00:11:39.564 }, 00:11:39.564 "claimed": false, 00:11:39.564 "zoned": false, 00:11:39.564 "supported_io_types": { 00:11:39.564 "read": true, 00:11:39.564 "write": true, 00:11:39.564 "unmap": true, 00:11:39.564 "flush": true, 00:11:39.564 "reset": true, 00:11:39.564 "nvme_admin": false, 00:11:39.564 "nvme_io": false, 00:11:39.564 "nvme_io_md": false, 00:11:39.564 "write_zeroes": true, 00:11:39.564 
"zcopy": true, 00:11:39.564 "get_zone_info": false, 00:11:39.564 "zone_management": false, 00:11:39.564 "zone_append": false, 00:11:39.564 "compare": false, 00:11:39.564 "compare_and_write": false, 00:11:39.564 "abort": true, 00:11:39.564 "seek_hole": false, 00:11:39.564 "seek_data": false, 00:11:39.564 "copy": true, 00:11:39.564 "nvme_iov_md": false 00:11:39.564 }, 00:11:39.564 "memory_domains": [ 00:11:39.564 { 00:11:39.564 "dma_device_id": "system", 00:11:39.564 "dma_device_type": 1 00:11:39.564 }, 00:11:39.564 { 00:11:39.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.564 "dma_device_type": 2 00:11:39.564 } 00:11:39.564 ], 00:11:39.564 "driver_specific": {} 00:11:39.564 } 00:11:39.564 ] 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.564 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.823 [2024-12-06 15:38:22.859457] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:39.823 [2024-12-06 15:38:22.859678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:39.823 [2024-12-06 15:38:22.859794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:39.823 [2024-12-06 15:38:22.862327] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:39.823 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.823 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:39.823 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.823 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.823 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:39.823 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.823 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:39.823 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.823 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.823 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.823 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.823 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.823 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.823 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.823 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.823 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.823 15:38:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.823 "name": "Existed_Raid", 00:11:39.823 "uuid": "ac569770-44b6-483e-927b-cabd53dcc174", 00:11:39.823 "strip_size_kb": 64, 00:11:39.823 "state": "configuring", 00:11:39.823 "raid_level": "raid0", 00:11:39.823 "superblock": true, 00:11:39.823 "num_base_bdevs": 3, 00:11:39.823 "num_base_bdevs_discovered": 2, 00:11:39.823 "num_base_bdevs_operational": 3, 00:11:39.823 "base_bdevs_list": [ 00:11:39.823 { 00:11:39.823 "name": "BaseBdev1", 00:11:39.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.823 "is_configured": false, 00:11:39.823 "data_offset": 0, 00:11:39.823 "data_size": 0 00:11:39.823 }, 00:11:39.823 { 00:11:39.823 "name": "BaseBdev2", 00:11:39.823 "uuid": "66640b7b-59f9-47c9-b21a-2df74e98e786", 00:11:39.823 "is_configured": true, 00:11:39.823 "data_offset": 2048, 00:11:39.823 "data_size": 63488 00:11:39.823 }, 00:11:39.823 { 00:11:39.823 "name": "BaseBdev3", 00:11:39.823 "uuid": "c7bb7142-b0fe-4258-a779-866fe65709cc", 00:11:39.823 "is_configured": true, 00:11:39.823 "data_offset": 2048, 00:11:39.823 "data_size": 63488 00:11:39.823 } 00:11:39.823 ] 00:11:39.823 }' 00:11:39.823 15:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.823 15:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.083 15:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:40.083 15:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.083 15:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.083 [2024-12-06 15:38:23.294822] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:40.083 15:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.083 15:38:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:40.083 15:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.083 15:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.083 15:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:40.083 15:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.083 15:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:40.083 15:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.083 15:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.083 15:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.083 15:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.083 15:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.083 15:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.083 15:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.083 15:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.083 15:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.083 15:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.083 "name": "Existed_Raid", 00:11:40.083 "uuid": "ac569770-44b6-483e-927b-cabd53dcc174", 00:11:40.083 "strip_size_kb": 64, 
00:11:40.083 "state": "configuring", 00:11:40.083 "raid_level": "raid0", 00:11:40.083 "superblock": true, 00:11:40.083 "num_base_bdevs": 3, 00:11:40.083 "num_base_bdevs_discovered": 1, 00:11:40.083 "num_base_bdevs_operational": 3, 00:11:40.083 "base_bdevs_list": [ 00:11:40.083 { 00:11:40.083 "name": "BaseBdev1", 00:11:40.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.083 "is_configured": false, 00:11:40.083 "data_offset": 0, 00:11:40.083 "data_size": 0 00:11:40.083 }, 00:11:40.083 { 00:11:40.083 "name": null, 00:11:40.083 "uuid": "66640b7b-59f9-47c9-b21a-2df74e98e786", 00:11:40.083 "is_configured": false, 00:11:40.083 "data_offset": 0, 00:11:40.083 "data_size": 63488 00:11:40.083 }, 00:11:40.083 { 00:11:40.083 "name": "BaseBdev3", 00:11:40.083 "uuid": "c7bb7142-b0fe-4258-a779-866fe65709cc", 00:11:40.083 "is_configured": true, 00:11:40.083 "data_offset": 2048, 00:11:40.083 "data_size": 63488 00:11:40.083 } 00:11:40.083 ] 00:11:40.083 }' 00:11:40.083 15:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.083 15:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.651 [2024-12-06 15:38:23.741802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:40.651 BaseBdev1 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.651 
[ 00:11:40.651 { 00:11:40.651 "name": "BaseBdev1", 00:11:40.651 "aliases": [ 00:11:40.651 "f3087848-15e4-478a-aef6-4c66f38d420c" 00:11:40.651 ], 00:11:40.651 "product_name": "Malloc disk", 00:11:40.651 "block_size": 512, 00:11:40.651 "num_blocks": 65536, 00:11:40.651 "uuid": "f3087848-15e4-478a-aef6-4c66f38d420c", 00:11:40.651 "assigned_rate_limits": { 00:11:40.651 "rw_ios_per_sec": 0, 00:11:40.651 "rw_mbytes_per_sec": 0, 00:11:40.651 "r_mbytes_per_sec": 0, 00:11:40.651 "w_mbytes_per_sec": 0 00:11:40.651 }, 00:11:40.651 "claimed": true, 00:11:40.651 "claim_type": "exclusive_write", 00:11:40.651 "zoned": false, 00:11:40.651 "supported_io_types": { 00:11:40.651 "read": true, 00:11:40.651 "write": true, 00:11:40.651 "unmap": true, 00:11:40.651 "flush": true, 00:11:40.651 "reset": true, 00:11:40.651 "nvme_admin": false, 00:11:40.651 "nvme_io": false, 00:11:40.651 "nvme_io_md": false, 00:11:40.651 "write_zeroes": true, 00:11:40.651 "zcopy": true, 00:11:40.651 "get_zone_info": false, 00:11:40.651 "zone_management": false, 00:11:40.651 "zone_append": false, 00:11:40.651 "compare": false, 00:11:40.651 "compare_and_write": false, 00:11:40.651 "abort": true, 00:11:40.651 "seek_hole": false, 00:11:40.651 "seek_data": false, 00:11:40.651 "copy": true, 00:11:40.651 "nvme_iov_md": false 00:11:40.651 }, 00:11:40.651 "memory_domains": [ 00:11:40.651 { 00:11:40.651 "dma_device_id": "system", 00:11:40.651 "dma_device_type": 1 00:11:40.651 }, 00:11:40.651 { 00:11:40.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.651 "dma_device_type": 2 00:11:40.651 } 00:11:40.651 ], 00:11:40.651 "driver_specific": {} 00:11:40.651 } 00:11:40.651 ] 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.651 "name": "Existed_Raid", 00:11:40.651 "uuid": "ac569770-44b6-483e-927b-cabd53dcc174", 00:11:40.651 "strip_size_kb": 64, 00:11:40.651 "state": "configuring", 00:11:40.651 "raid_level": "raid0", 00:11:40.651 "superblock": true, 
00:11:40.651 "num_base_bdevs": 3, 00:11:40.651 "num_base_bdevs_discovered": 2, 00:11:40.651 "num_base_bdevs_operational": 3, 00:11:40.651 "base_bdevs_list": [ 00:11:40.651 { 00:11:40.651 "name": "BaseBdev1", 00:11:40.651 "uuid": "f3087848-15e4-478a-aef6-4c66f38d420c", 00:11:40.651 "is_configured": true, 00:11:40.651 "data_offset": 2048, 00:11:40.651 "data_size": 63488 00:11:40.651 }, 00:11:40.651 { 00:11:40.651 "name": null, 00:11:40.651 "uuid": "66640b7b-59f9-47c9-b21a-2df74e98e786", 00:11:40.651 "is_configured": false, 00:11:40.651 "data_offset": 0, 00:11:40.651 "data_size": 63488 00:11:40.651 }, 00:11:40.651 { 00:11:40.651 "name": "BaseBdev3", 00:11:40.651 "uuid": "c7bb7142-b0fe-4258-a779-866fe65709cc", 00:11:40.651 "is_configured": true, 00:11:40.651 "data_offset": 2048, 00:11:40.651 "data_size": 63488 00:11:40.651 } 00:11:40.651 ] 00:11:40.651 }' 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.651 15:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.910 15:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.910 15:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.910 15:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.910 15:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:41.169 15:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.169 15:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:41.169 15:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:41.169 15:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:11:41.169 15:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.169 [2024-12-06 15:38:24.253197] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:41.169 15:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.169 15:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:41.169 15:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.169 15:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.169 15:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:41.169 15:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.169 15:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:41.169 15:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.169 15:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.169 15:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.169 15:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.169 15:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.169 15:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.169 15:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.169 15:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:11:41.169 15:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.169 15:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.169 "name": "Existed_Raid", 00:11:41.169 "uuid": "ac569770-44b6-483e-927b-cabd53dcc174", 00:11:41.169 "strip_size_kb": 64, 00:11:41.169 "state": "configuring", 00:11:41.169 "raid_level": "raid0", 00:11:41.169 "superblock": true, 00:11:41.169 "num_base_bdevs": 3, 00:11:41.169 "num_base_bdevs_discovered": 1, 00:11:41.169 "num_base_bdevs_operational": 3, 00:11:41.169 "base_bdevs_list": [ 00:11:41.169 { 00:11:41.169 "name": "BaseBdev1", 00:11:41.169 "uuid": "f3087848-15e4-478a-aef6-4c66f38d420c", 00:11:41.169 "is_configured": true, 00:11:41.169 "data_offset": 2048, 00:11:41.169 "data_size": 63488 00:11:41.169 }, 00:11:41.169 { 00:11:41.169 "name": null, 00:11:41.169 "uuid": "66640b7b-59f9-47c9-b21a-2df74e98e786", 00:11:41.169 "is_configured": false, 00:11:41.170 "data_offset": 0, 00:11:41.170 "data_size": 63488 00:11:41.170 }, 00:11:41.170 { 00:11:41.170 "name": null, 00:11:41.170 "uuid": "c7bb7142-b0fe-4258-a779-866fe65709cc", 00:11:41.170 "is_configured": false, 00:11:41.170 "data_offset": 0, 00:11:41.170 "data_size": 63488 00:11:41.170 } 00:11:41.170 ] 00:11:41.170 }' 00:11:41.170 15:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.170 15:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.428 15:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:41.429 15:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.429 15:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.429 15:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:11:41.429 15:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.687 15:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:41.687 15:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:41.687 15:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.687 15:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.687 [2024-12-06 15:38:24.728700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:41.687 15:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.687 15:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:41.687 15:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.687 15:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.687 15:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:41.687 15:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.687 15:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:41.687 15:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.687 15:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.687 15:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.687 15:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:11:41.687 15:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.687 15:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.687 15:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.687 15:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.687 15:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.687 15:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.687 "name": "Existed_Raid", 00:11:41.687 "uuid": "ac569770-44b6-483e-927b-cabd53dcc174", 00:11:41.687 "strip_size_kb": 64, 00:11:41.687 "state": "configuring", 00:11:41.687 "raid_level": "raid0", 00:11:41.687 "superblock": true, 00:11:41.687 "num_base_bdevs": 3, 00:11:41.687 "num_base_bdevs_discovered": 2, 00:11:41.687 "num_base_bdevs_operational": 3, 00:11:41.687 "base_bdevs_list": [ 00:11:41.687 { 00:11:41.687 "name": "BaseBdev1", 00:11:41.687 "uuid": "f3087848-15e4-478a-aef6-4c66f38d420c", 00:11:41.687 "is_configured": true, 00:11:41.687 "data_offset": 2048, 00:11:41.687 "data_size": 63488 00:11:41.687 }, 00:11:41.687 { 00:11:41.687 "name": null, 00:11:41.687 "uuid": "66640b7b-59f9-47c9-b21a-2df74e98e786", 00:11:41.687 "is_configured": false, 00:11:41.687 "data_offset": 0, 00:11:41.687 "data_size": 63488 00:11:41.687 }, 00:11:41.687 { 00:11:41.687 "name": "BaseBdev3", 00:11:41.687 "uuid": "c7bb7142-b0fe-4258-a779-866fe65709cc", 00:11:41.687 "is_configured": true, 00:11:41.687 "data_offset": 2048, 00:11:41.687 "data_size": 63488 00:11:41.687 } 00:11:41.687 ] 00:11:41.687 }' 00:11:41.687 15:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.687 15:38:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:41.947 15:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.947 15:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.947 15:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.947 15:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:41.947 15:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.947 15:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:41.947 15:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:41.947 15:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.947 15:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.947 [2024-12-06 15:38:25.160722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:42.208 15:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.208 15:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:42.208 15:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.208 15:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.208 15:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:42.208 15:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:42.208 15:38:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:42.208 15:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.208 15:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.208 15:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.208 15:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.208 15:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.208 15:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.208 15:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.208 15:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.208 15:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.208 15:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.208 "name": "Existed_Raid", 00:11:42.208 "uuid": "ac569770-44b6-483e-927b-cabd53dcc174", 00:11:42.208 "strip_size_kb": 64, 00:11:42.209 "state": "configuring", 00:11:42.209 "raid_level": "raid0", 00:11:42.209 "superblock": true, 00:11:42.209 "num_base_bdevs": 3, 00:11:42.209 "num_base_bdevs_discovered": 1, 00:11:42.209 "num_base_bdevs_operational": 3, 00:11:42.209 "base_bdevs_list": [ 00:11:42.209 { 00:11:42.209 "name": null, 00:11:42.209 "uuid": "f3087848-15e4-478a-aef6-4c66f38d420c", 00:11:42.209 "is_configured": false, 00:11:42.209 "data_offset": 0, 00:11:42.209 "data_size": 63488 00:11:42.209 }, 00:11:42.209 { 00:11:42.209 "name": null, 00:11:42.209 "uuid": "66640b7b-59f9-47c9-b21a-2df74e98e786", 00:11:42.209 "is_configured": false, 00:11:42.209 "data_offset": 0, 00:11:42.209 
"data_size": 63488 00:11:42.209 }, 00:11:42.209 { 00:11:42.209 "name": "BaseBdev3", 00:11:42.209 "uuid": "c7bb7142-b0fe-4258-a779-866fe65709cc", 00:11:42.209 "is_configured": true, 00:11:42.209 "data_offset": 2048, 00:11:42.209 "data_size": 63488 00:11:42.209 } 00:11:42.209 ] 00:11:42.209 }' 00:11:42.209 15:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.209 15:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.466 15:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:42.466 15:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.466 15:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.466 15:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.466 15:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.466 15:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:42.466 15:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:42.466 15:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.466 15:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.466 [2024-12-06 15:38:25.687942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:42.466 15:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.466 15:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:42.466 15:38:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.466 15:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.466 15:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:42.466 15:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:42.466 15:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:42.466 15:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.466 15:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.466 15:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.466 15:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.466 15:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.466 15:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.466 15:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.466 15:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.466 15:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.466 15:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.466 "name": "Existed_Raid", 00:11:42.466 "uuid": "ac569770-44b6-483e-927b-cabd53dcc174", 00:11:42.466 "strip_size_kb": 64, 00:11:42.466 "state": "configuring", 00:11:42.466 "raid_level": "raid0", 00:11:42.466 "superblock": true, 00:11:42.466 "num_base_bdevs": 3, 00:11:42.466 
"num_base_bdevs_discovered": 2, 00:11:42.466 "num_base_bdevs_operational": 3, 00:11:42.466 "base_bdevs_list": [ 00:11:42.466 { 00:11:42.466 "name": null, 00:11:42.466 "uuid": "f3087848-15e4-478a-aef6-4c66f38d420c", 00:11:42.466 "is_configured": false, 00:11:42.466 "data_offset": 0, 00:11:42.466 "data_size": 63488 00:11:42.466 }, 00:11:42.466 { 00:11:42.466 "name": "BaseBdev2", 00:11:42.466 "uuid": "66640b7b-59f9-47c9-b21a-2df74e98e786", 00:11:42.466 "is_configured": true, 00:11:42.466 "data_offset": 2048, 00:11:42.466 "data_size": 63488 00:11:42.466 }, 00:11:42.466 { 00:11:42.466 "name": "BaseBdev3", 00:11:42.466 "uuid": "c7bb7142-b0fe-4258-a779-866fe65709cc", 00:11:42.466 "is_configured": true, 00:11:42.466 "data_offset": 2048, 00:11:42.466 "data_size": 63488 00:11:42.466 } 00:11:42.466 ] 00:11:42.466 }' 00:11:42.466 15:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.466 15:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.029 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.029 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:43.029 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.029 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.029 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.029 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:43.029 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:43.029 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.029 15:38:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.029 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.029 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.029 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f3087848-15e4-478a-aef6-4c66f38d420c 00:11:43.029 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.029 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.029 [2024-12-06 15:38:26.251664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:43.029 [2024-12-06 15:38:26.251930] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:43.029 [2024-12-06 15:38:26.251951] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:43.029 [2024-12-06 15:38:26.252252] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:43.029 NewBaseBdev 00:11:43.029 [2024-12-06 15:38:26.252423] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:43.029 [2024-12-06 15:38:26.252435] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:43.029 [2024-12-06 15:38:26.252596] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:43.029 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.029 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:43.029 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:43.029 
15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:43.029 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:43.029 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:43.029 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:43.029 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:43.029 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.029 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.029 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.029 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:43.029 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.029 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.029 [ 00:11:43.029 { 00:11:43.029 "name": "NewBaseBdev", 00:11:43.029 "aliases": [ 00:11:43.029 "f3087848-15e4-478a-aef6-4c66f38d420c" 00:11:43.029 ], 00:11:43.029 "product_name": "Malloc disk", 00:11:43.029 "block_size": 512, 00:11:43.029 "num_blocks": 65536, 00:11:43.029 "uuid": "f3087848-15e4-478a-aef6-4c66f38d420c", 00:11:43.029 "assigned_rate_limits": { 00:11:43.029 "rw_ios_per_sec": 0, 00:11:43.029 "rw_mbytes_per_sec": 0, 00:11:43.029 "r_mbytes_per_sec": 0, 00:11:43.029 "w_mbytes_per_sec": 0 00:11:43.029 }, 00:11:43.029 "claimed": true, 00:11:43.029 "claim_type": "exclusive_write", 00:11:43.029 "zoned": false, 00:11:43.029 "supported_io_types": { 00:11:43.029 "read": true, 00:11:43.029 "write": true, 00:11:43.029 
"unmap": true, 00:11:43.029 "flush": true, 00:11:43.029 "reset": true, 00:11:43.029 "nvme_admin": false, 00:11:43.029 "nvme_io": false, 00:11:43.029 "nvme_io_md": false, 00:11:43.029 "write_zeroes": true, 00:11:43.029 "zcopy": true, 00:11:43.029 "get_zone_info": false, 00:11:43.029 "zone_management": false, 00:11:43.029 "zone_append": false, 00:11:43.029 "compare": false, 00:11:43.029 "compare_and_write": false, 00:11:43.029 "abort": true, 00:11:43.029 "seek_hole": false, 00:11:43.029 "seek_data": false, 00:11:43.029 "copy": true, 00:11:43.029 "nvme_iov_md": false 00:11:43.029 }, 00:11:43.029 "memory_domains": [ 00:11:43.029 { 00:11:43.029 "dma_device_id": "system", 00:11:43.029 "dma_device_type": 1 00:11:43.029 }, 00:11:43.029 { 00:11:43.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.030 "dma_device_type": 2 00:11:43.030 } 00:11:43.030 ], 00:11:43.030 "driver_specific": {} 00:11:43.030 } 00:11:43.030 ] 00:11:43.030 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.030 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:43.030 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:43.030 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.030 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:43.030 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:43.030 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:43.030 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:43.030 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:11:43.030 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.030 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.030 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.030 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.030 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.030 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.030 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.286 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.286 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.286 "name": "Existed_Raid", 00:11:43.286 "uuid": "ac569770-44b6-483e-927b-cabd53dcc174", 00:11:43.286 "strip_size_kb": 64, 00:11:43.286 "state": "online", 00:11:43.286 "raid_level": "raid0", 00:11:43.286 "superblock": true, 00:11:43.286 "num_base_bdevs": 3, 00:11:43.286 "num_base_bdevs_discovered": 3, 00:11:43.286 "num_base_bdevs_operational": 3, 00:11:43.286 "base_bdevs_list": [ 00:11:43.286 { 00:11:43.286 "name": "NewBaseBdev", 00:11:43.286 "uuid": "f3087848-15e4-478a-aef6-4c66f38d420c", 00:11:43.286 "is_configured": true, 00:11:43.286 "data_offset": 2048, 00:11:43.286 "data_size": 63488 00:11:43.286 }, 00:11:43.286 { 00:11:43.286 "name": "BaseBdev2", 00:11:43.286 "uuid": "66640b7b-59f9-47c9-b21a-2df74e98e786", 00:11:43.286 "is_configured": true, 00:11:43.286 "data_offset": 2048, 00:11:43.286 "data_size": 63488 00:11:43.286 }, 00:11:43.286 { 00:11:43.286 "name": "BaseBdev3", 00:11:43.286 "uuid": "c7bb7142-b0fe-4258-a779-866fe65709cc", 00:11:43.286 
"is_configured": true, 00:11:43.286 "data_offset": 2048, 00:11:43.286 "data_size": 63488 00:11:43.286 } 00:11:43.286 ] 00:11:43.286 }' 00:11:43.286 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.286 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.544 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:43.544 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:43.544 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:43.544 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:43.544 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:43.544 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:43.544 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:43.544 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:43.544 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.544 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.544 [2024-12-06 15:38:26.732002] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:43.544 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.544 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:43.544 "name": "Existed_Raid", 00:11:43.544 "aliases": [ 00:11:43.544 "ac569770-44b6-483e-927b-cabd53dcc174" 00:11:43.544 ], 00:11:43.544 "product_name": "Raid 
Volume", 00:11:43.544 "block_size": 512, 00:11:43.544 "num_blocks": 190464, 00:11:43.544 "uuid": "ac569770-44b6-483e-927b-cabd53dcc174", 00:11:43.544 "assigned_rate_limits": { 00:11:43.544 "rw_ios_per_sec": 0, 00:11:43.544 "rw_mbytes_per_sec": 0, 00:11:43.544 "r_mbytes_per_sec": 0, 00:11:43.544 "w_mbytes_per_sec": 0 00:11:43.544 }, 00:11:43.544 "claimed": false, 00:11:43.544 "zoned": false, 00:11:43.544 "supported_io_types": { 00:11:43.544 "read": true, 00:11:43.544 "write": true, 00:11:43.544 "unmap": true, 00:11:43.544 "flush": true, 00:11:43.544 "reset": true, 00:11:43.544 "nvme_admin": false, 00:11:43.544 "nvme_io": false, 00:11:43.544 "nvme_io_md": false, 00:11:43.544 "write_zeroes": true, 00:11:43.544 "zcopy": false, 00:11:43.544 "get_zone_info": false, 00:11:43.544 "zone_management": false, 00:11:43.544 "zone_append": false, 00:11:43.544 "compare": false, 00:11:43.544 "compare_and_write": false, 00:11:43.544 "abort": false, 00:11:43.544 "seek_hole": false, 00:11:43.544 "seek_data": false, 00:11:43.544 "copy": false, 00:11:43.544 "nvme_iov_md": false 00:11:43.544 }, 00:11:43.544 "memory_domains": [ 00:11:43.544 { 00:11:43.544 "dma_device_id": "system", 00:11:43.544 "dma_device_type": 1 00:11:43.544 }, 00:11:43.544 { 00:11:43.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.544 "dma_device_type": 2 00:11:43.544 }, 00:11:43.544 { 00:11:43.544 "dma_device_id": "system", 00:11:43.544 "dma_device_type": 1 00:11:43.544 }, 00:11:43.544 { 00:11:43.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.544 "dma_device_type": 2 00:11:43.544 }, 00:11:43.544 { 00:11:43.544 "dma_device_id": "system", 00:11:43.544 "dma_device_type": 1 00:11:43.544 }, 00:11:43.544 { 00:11:43.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.544 "dma_device_type": 2 00:11:43.544 } 00:11:43.544 ], 00:11:43.544 "driver_specific": { 00:11:43.544 "raid": { 00:11:43.544 "uuid": "ac569770-44b6-483e-927b-cabd53dcc174", 00:11:43.544 "strip_size_kb": 64, 00:11:43.544 "state": "online", 
00:11:43.544 "raid_level": "raid0", 00:11:43.544 "superblock": true, 00:11:43.544 "num_base_bdevs": 3, 00:11:43.544 "num_base_bdevs_discovered": 3, 00:11:43.544 "num_base_bdevs_operational": 3, 00:11:43.544 "base_bdevs_list": [ 00:11:43.544 { 00:11:43.544 "name": "NewBaseBdev", 00:11:43.544 "uuid": "f3087848-15e4-478a-aef6-4c66f38d420c", 00:11:43.544 "is_configured": true, 00:11:43.544 "data_offset": 2048, 00:11:43.544 "data_size": 63488 00:11:43.544 }, 00:11:43.544 { 00:11:43.544 "name": "BaseBdev2", 00:11:43.544 "uuid": "66640b7b-59f9-47c9-b21a-2df74e98e786", 00:11:43.544 "is_configured": true, 00:11:43.544 "data_offset": 2048, 00:11:43.544 "data_size": 63488 00:11:43.544 }, 00:11:43.544 { 00:11:43.544 "name": "BaseBdev3", 00:11:43.544 "uuid": "c7bb7142-b0fe-4258-a779-866fe65709cc", 00:11:43.544 "is_configured": true, 00:11:43.544 "data_offset": 2048, 00:11:43.544 "data_size": 63488 00:11:43.544 } 00:11:43.544 ] 00:11:43.544 } 00:11:43.544 } 00:11:43.544 }' 00:11:43.544 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:43.544 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:43.544 BaseBdev2 00:11:43.544 BaseBdev3' 00:11:43.544 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.802 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:43.802 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.802 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:43.802 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.802 15:38:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.802 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.802 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.802 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.802 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.802 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.802 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.802 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:43.802 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.802 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.802 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.802 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.802 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.803 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.803 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:43.803 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.803 15:38:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:43.803 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.803 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.803 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.803 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.803 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:43.803 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.803 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.803 [2024-12-06 15:38:26.991697] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:43.803 [2024-12-06 15:38:26.991736] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:43.803 [2024-12-06 15:38:26.991852] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:43.803 [2024-12-06 15:38:26.991924] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:43.803 [2024-12-06 15:38:26.991941] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:43.803 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.803 15:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64473 00:11:43.803 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64473 ']' 00:11:43.803 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # 
kill -0 64473 00:11:43.803 15:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:43.803 15:38:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:43.803 15:38:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64473 00:11:43.803 killing process with pid 64473 00:11:43.803 15:38:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:43.803 15:38:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:43.803 15:38:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64473' 00:11:43.803 15:38:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64473 00:11:43.803 [2024-12-06 15:38:27.047973] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:43.803 15:38:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64473 00:11:44.369 [2024-12-06 15:38:27.384529] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:45.746 15:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:45.746 00:11:45.746 real 0m10.520s 00:11:45.746 user 0m16.324s 00:11:45.746 sys 0m2.224s 00:11:45.746 15:38:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:45.746 15:38:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.746 ************************************ 00:11:45.746 END TEST raid_state_function_test_sb 00:11:45.746 ************************************ 00:11:45.746 15:38:28 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:11:45.746 15:38:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 
00:11:45.746 15:38:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.746 15:38:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:45.746 ************************************ 00:11:45.746 START TEST raid_superblock_test 00:11:45.746 ************************************ 00:11:45.746 15:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:11:45.746 15:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:11:45.746 15:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:11:45.746 15:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:45.746 15:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:45.746 15:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:45.746 15:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:45.746 15:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:45.746 15:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:45.746 15:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:45.746 15:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:45.746 15:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:45.746 15:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:45.746 15:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:45.746 15:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:11:45.746 15:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # 
strip_size=64 00:11:45.746 15:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:45.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.746 15:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65093 00:11:45.746 15:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65093 00:11:45.746 15:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:45.746 15:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65093 ']' 00:11:45.746 15:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.746 15:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:45.746 15:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.746 15:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:45.746 15:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.746 [2024-12-06 15:38:28.841108] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:11:45.746 [2024-12-06 15:38:28.841276] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65093 ] 00:11:45.746 [2024-12-06 15:38:29.017588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.004 [2024-12-06 15:38:29.156675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.262 [2024-12-06 15:38:29.403604] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:46.262 [2024-12-06 15:38:29.403663] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:46.521 15:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:46.521 15:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:46.521 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:46.521 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:46.521 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:46.521 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:46.521 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:46.521 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:46.521 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:46.521 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:46.521 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:46.521 
15:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.521 15:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.521 malloc1 00:11:46.521 15:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.521 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:46.521 15:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.521 15:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.521 [2024-12-06 15:38:29.756396] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:46.521 [2024-12-06 15:38:29.756619] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.521 [2024-12-06 15:38:29.756661] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:46.521 [2024-12-06 15:38:29.756674] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.521 [2024-12-06 15:38:29.759447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.521 [2024-12-06 15:38:29.759491] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:46.521 pt1 00:11:46.521 15:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.521 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:46.521 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:46.521 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:46.521 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:46.521 15:38:29 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:46.521 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:46.521 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:46.522 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:46.522 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:46.522 15:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.522 15:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.522 malloc2 00:11:46.522 15:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.522 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:46.522 15:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.522 15:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.782 [2024-12-06 15:38:29.818862] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:46.782 [2024-12-06 15:38:29.819051] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.782 [2024-12-06 15:38:29.819096] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:46.782 [2024-12-06 15:38:29.819109] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.782 [2024-12-06 15:38:29.821821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.782 [2024-12-06 15:38:29.821859] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:46.782 
pt2 00:11:46.782 15:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.782 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:46.782 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:46.782 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:46.782 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:46.782 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:46.782 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:46.782 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:46.782 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:46.782 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:46.782 15:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.782 15:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.782 malloc3 00:11:46.782 15:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.782 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:46.782 15:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.782 15:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.782 [2024-12-06 15:38:29.896295] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:46.782 [2024-12-06 15:38:29.896467] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.782 [2024-12-06 15:38:29.896551] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:46.782 [2024-12-06 15:38:29.896630] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.782 [2024-12-06 15:38:29.899348] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.782 [2024-12-06 15:38:29.899511] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:46.782 pt3 00:11:46.782 15:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.782 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:46.782 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:46.782 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:11:46.782 15:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.782 15:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.782 [2024-12-06 15:38:29.908415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:46.782 [2024-12-06 15:38:29.910826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:46.782 [2024-12-06 15:38:29.910899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:46.782 [2024-12-06 15:38:29.911065] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:46.782 [2024-12-06 15:38:29.911081] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:46.783 [2024-12-06 15:38:29.911352] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:11:46.783 [2024-12-06 15:38:29.911550] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:46.783 [2024-12-06 15:38:29.911563] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:46.783 [2024-12-06 15:38:29.911714] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:46.783 15:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.783 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:46.783 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.783 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.783 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:46.783 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:46.783 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:46.783 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.783 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.783 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.783 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.783 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.783 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.783 15:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.783 15:38:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.783 15:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.783 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.783 "name": "raid_bdev1", 00:11:46.783 "uuid": "2d8073cc-54b5-4dfc-818a-7321fb313713", 00:11:46.783 "strip_size_kb": 64, 00:11:46.783 "state": "online", 00:11:46.783 "raid_level": "raid0", 00:11:46.783 "superblock": true, 00:11:46.783 "num_base_bdevs": 3, 00:11:46.783 "num_base_bdevs_discovered": 3, 00:11:46.783 "num_base_bdevs_operational": 3, 00:11:46.783 "base_bdevs_list": [ 00:11:46.783 { 00:11:46.783 "name": "pt1", 00:11:46.783 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:46.783 "is_configured": true, 00:11:46.783 "data_offset": 2048, 00:11:46.783 "data_size": 63488 00:11:46.783 }, 00:11:46.783 { 00:11:46.783 "name": "pt2", 00:11:46.783 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:46.783 "is_configured": true, 00:11:46.783 "data_offset": 2048, 00:11:46.783 "data_size": 63488 00:11:46.783 }, 00:11:46.783 { 00:11:46.783 "name": "pt3", 00:11:46.783 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:46.783 "is_configured": true, 00:11:46.783 "data_offset": 2048, 00:11:46.783 "data_size": 63488 00:11:46.783 } 00:11:46.783 ] 00:11:46.783 }' 00:11:46.783 15:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.783 15:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.352 [2024-12-06 15:38:30.356099] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:47.352 "name": "raid_bdev1", 00:11:47.352 "aliases": [ 00:11:47.352 "2d8073cc-54b5-4dfc-818a-7321fb313713" 00:11:47.352 ], 00:11:47.352 "product_name": "Raid Volume", 00:11:47.352 "block_size": 512, 00:11:47.352 "num_blocks": 190464, 00:11:47.352 "uuid": "2d8073cc-54b5-4dfc-818a-7321fb313713", 00:11:47.352 "assigned_rate_limits": { 00:11:47.352 "rw_ios_per_sec": 0, 00:11:47.352 "rw_mbytes_per_sec": 0, 00:11:47.352 "r_mbytes_per_sec": 0, 00:11:47.352 "w_mbytes_per_sec": 0 00:11:47.352 }, 00:11:47.352 "claimed": false, 00:11:47.352 "zoned": false, 00:11:47.352 "supported_io_types": { 00:11:47.352 "read": true, 00:11:47.352 "write": true, 00:11:47.352 "unmap": true, 00:11:47.352 "flush": true, 00:11:47.352 "reset": true, 00:11:47.352 "nvme_admin": false, 00:11:47.352 "nvme_io": false, 00:11:47.352 "nvme_io_md": false, 00:11:47.352 "write_zeroes": true, 00:11:47.352 "zcopy": false, 00:11:47.352 "get_zone_info": false, 00:11:47.352 "zone_management": false, 00:11:47.352 "zone_append": false, 00:11:47.352 "compare": 
false, 00:11:47.352 "compare_and_write": false, 00:11:47.352 "abort": false, 00:11:47.352 "seek_hole": false, 00:11:47.352 "seek_data": false, 00:11:47.352 "copy": false, 00:11:47.352 "nvme_iov_md": false 00:11:47.352 }, 00:11:47.352 "memory_domains": [ 00:11:47.352 { 00:11:47.352 "dma_device_id": "system", 00:11:47.352 "dma_device_type": 1 00:11:47.352 }, 00:11:47.352 { 00:11:47.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.352 "dma_device_type": 2 00:11:47.352 }, 00:11:47.352 { 00:11:47.352 "dma_device_id": "system", 00:11:47.352 "dma_device_type": 1 00:11:47.352 }, 00:11:47.352 { 00:11:47.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.352 "dma_device_type": 2 00:11:47.352 }, 00:11:47.352 { 00:11:47.352 "dma_device_id": "system", 00:11:47.352 "dma_device_type": 1 00:11:47.352 }, 00:11:47.352 { 00:11:47.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.352 "dma_device_type": 2 00:11:47.352 } 00:11:47.352 ], 00:11:47.352 "driver_specific": { 00:11:47.352 "raid": { 00:11:47.352 "uuid": "2d8073cc-54b5-4dfc-818a-7321fb313713", 00:11:47.352 "strip_size_kb": 64, 00:11:47.352 "state": "online", 00:11:47.352 "raid_level": "raid0", 00:11:47.352 "superblock": true, 00:11:47.352 "num_base_bdevs": 3, 00:11:47.352 "num_base_bdevs_discovered": 3, 00:11:47.352 "num_base_bdevs_operational": 3, 00:11:47.352 "base_bdevs_list": [ 00:11:47.352 { 00:11:47.352 "name": "pt1", 00:11:47.352 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:47.352 "is_configured": true, 00:11:47.352 "data_offset": 2048, 00:11:47.352 "data_size": 63488 00:11:47.352 }, 00:11:47.352 { 00:11:47.352 "name": "pt2", 00:11:47.352 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:47.352 "is_configured": true, 00:11:47.352 "data_offset": 2048, 00:11:47.352 "data_size": 63488 00:11:47.352 }, 00:11:47.352 { 00:11:47.352 "name": "pt3", 00:11:47.352 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:47.352 "is_configured": true, 00:11:47.352 "data_offset": 2048, 00:11:47.352 "data_size": 
63488 00:11:47.352 } 00:11:47.352 ] 00:11:47.352 } 00:11:47.352 } 00:11:47.352 }' 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:47.352 pt2 00:11:47.352 pt3' 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.352 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:47.353 [2024-12-06 15:38:30.619820] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:47.353 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2d8073cc-54b5-4dfc-818a-7321fb313713 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 2d8073cc-54b5-4dfc-818a-7321fb313713 ']' 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.613 [2024-12-06 15:38:30.659520] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:47.613 [2024-12-06 15:38:30.659557] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:47.613 [2024-12-06 15:38:30.659654] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:47.613 [2024-12-06 15:38:30.659727] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:47.613 [2024-12-06 15:38:30.659739] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:47.613 15:38:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.613 [2024-12-06 15:38:30.799348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:47.613 [2024-12-06 15:38:30.801918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:47.613 [2024-12-06 15:38:30.801978] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:47.613 [2024-12-06 15:38:30.802042] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:47.613 [2024-12-06 15:38:30.802114] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:47.613 [2024-12-06 15:38:30.802138] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:47.613 [2024-12-06 15:38:30.802160] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:47.613 [2024-12-06 15:38:30.802175] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:47.613 request: 00:11:47.613 { 00:11:47.613 "name": "raid_bdev1", 00:11:47.613 "raid_level": "raid0", 00:11:47.613 "base_bdevs": [ 00:11:47.613 "malloc1", 00:11:47.613 "malloc2", 00:11:47.613 "malloc3" 00:11:47.613 ], 00:11:47.613 "strip_size_kb": 64, 00:11:47.613 "superblock": false, 00:11:47.613 "method": "bdev_raid_create", 00:11:47.613 "req_id": 1 00:11:47.613 } 00:11:47.613 Got JSON-RPC error response 00:11:47.613 response: 00:11:47.613 { 00:11:47.613 "code": -17, 00:11:47.613 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:47.613 } 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:47.613 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.614 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.614 [2024-12-06 15:38:30.871253] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:47.614 [2024-12-06 15:38:30.871349] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.614 [2024-12-06 15:38:30.871379] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:47.614 [2024-12-06 15:38:30.871391] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.614 [2024-12-06 15:38:30.874357] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.614 [2024-12-06 15:38:30.874532] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:47.614 [2024-12-06 15:38:30.874686] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:47.614 [2024-12-06 15:38:30.874759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:11:47.614 pt1 00:11:47.614 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.614 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:11:47.614 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.614 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.614 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:47.614 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.614 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:47.614 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.614 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.614 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.614 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.614 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.614 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.614 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.614 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.902 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.902 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.902 "name": "raid_bdev1", 00:11:47.902 "uuid": "2d8073cc-54b5-4dfc-818a-7321fb313713", 00:11:47.902 
"strip_size_kb": 64, 00:11:47.902 "state": "configuring", 00:11:47.902 "raid_level": "raid0", 00:11:47.902 "superblock": true, 00:11:47.902 "num_base_bdevs": 3, 00:11:47.902 "num_base_bdevs_discovered": 1, 00:11:47.902 "num_base_bdevs_operational": 3, 00:11:47.902 "base_bdevs_list": [ 00:11:47.902 { 00:11:47.902 "name": "pt1", 00:11:47.902 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:47.902 "is_configured": true, 00:11:47.902 "data_offset": 2048, 00:11:47.902 "data_size": 63488 00:11:47.902 }, 00:11:47.902 { 00:11:47.902 "name": null, 00:11:47.902 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:47.902 "is_configured": false, 00:11:47.902 "data_offset": 2048, 00:11:47.902 "data_size": 63488 00:11:47.902 }, 00:11:47.902 { 00:11:47.902 "name": null, 00:11:47.902 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:47.902 "is_configured": false, 00:11:47.902 "data_offset": 2048, 00:11:47.902 "data_size": 63488 00:11:47.902 } 00:11:47.902 ] 00:11:47.902 }' 00:11:47.902 15:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.902 15:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.162 15:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:11:48.162 15:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:48.162 15:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.162 15:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.162 [2024-12-06 15:38:31.286705] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:48.162 [2024-12-06 15:38:31.286797] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.162 [2024-12-06 15:38:31.286835] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:11:48.162 [2024-12-06 15:38:31.286848] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.162 [2024-12-06 15:38:31.287399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.162 [2024-12-06 15:38:31.287419] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:48.162 [2024-12-06 15:38:31.287551] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:48.162 [2024-12-06 15:38:31.287590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:48.162 pt2 00:11:48.162 15:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.162 15:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:48.162 15:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.162 15:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.162 [2024-12-06 15:38:31.298705] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:48.162 15:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.162 15:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:11:48.162 15:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.162 15:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.162 15:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:48.162 15:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:48.162 15:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:48.162 15:38:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.162 15:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.162 15:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.162 15:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.162 15:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.162 15:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.162 15:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.162 15:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.162 15:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.162 15:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.162 "name": "raid_bdev1", 00:11:48.162 "uuid": "2d8073cc-54b5-4dfc-818a-7321fb313713", 00:11:48.162 "strip_size_kb": 64, 00:11:48.162 "state": "configuring", 00:11:48.162 "raid_level": "raid0", 00:11:48.162 "superblock": true, 00:11:48.162 "num_base_bdevs": 3, 00:11:48.162 "num_base_bdevs_discovered": 1, 00:11:48.162 "num_base_bdevs_operational": 3, 00:11:48.162 "base_bdevs_list": [ 00:11:48.162 { 00:11:48.162 "name": "pt1", 00:11:48.162 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:48.162 "is_configured": true, 00:11:48.162 "data_offset": 2048, 00:11:48.162 "data_size": 63488 00:11:48.162 }, 00:11:48.162 { 00:11:48.162 "name": null, 00:11:48.162 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:48.162 "is_configured": false, 00:11:48.162 "data_offset": 0, 00:11:48.162 "data_size": 63488 00:11:48.162 }, 00:11:48.162 { 00:11:48.162 "name": null, 00:11:48.162 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:48.162 
"is_configured": false, 00:11:48.162 "data_offset": 2048, 00:11:48.162 "data_size": 63488 00:11:48.162 } 00:11:48.162 ] 00:11:48.162 }' 00:11:48.162 15:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.162 15:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.731 15:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:48.731 15:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:48.731 15:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:48.731 15:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.731 15:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.731 [2024-12-06 15:38:31.746388] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:48.731 [2024-12-06 15:38:31.746673] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.731 [2024-12-06 15:38:31.746714] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:48.731 [2024-12-06 15:38:31.746731] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.731 [2024-12-06 15:38:31.747347] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.731 [2024-12-06 15:38:31.747383] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:48.731 [2024-12-06 15:38:31.747497] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:48.731 [2024-12-06 15:38:31.747546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:48.731 pt2 00:11:48.731 15:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:48.731 15:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:48.731 15:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:48.731 15:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:48.731 15:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.731 15:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.731 [2024-12-06 15:38:31.758355] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:48.731 [2024-12-06 15:38:31.758581] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.731 [2024-12-06 15:38:31.758616] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:48.731 [2024-12-06 15:38:31.758633] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.731 [2024-12-06 15:38:31.759183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.731 [2024-12-06 15:38:31.759219] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:48.731 [2024-12-06 15:38:31.759320] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:48.731 [2024-12-06 15:38:31.759353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:48.731 [2024-12-06 15:38:31.759493] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:48.731 [2024-12-06 15:38:31.759527] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:48.731 [2024-12-06 15:38:31.759836] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:48.731 [2024-12-06 15:38:31.760012] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:48.731 [2024-12-06 15:38:31.760021] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:48.731 [2024-12-06 15:38:31.760186] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.731 pt3 00:11:48.731 15:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.731 15:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:48.731 15:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:48.731 15:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:48.731 15:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.731 15:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.731 15:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:48.731 15:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:48.731 15:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:48.731 15:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.731 15:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.731 15:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.731 15:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.731 15:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.731 15:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:48.731 15:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.731 15:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.731 15:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.731 15:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.731 "name": "raid_bdev1", 00:11:48.731 "uuid": "2d8073cc-54b5-4dfc-818a-7321fb313713", 00:11:48.731 "strip_size_kb": 64, 00:11:48.731 "state": "online", 00:11:48.731 "raid_level": "raid0", 00:11:48.731 "superblock": true, 00:11:48.731 "num_base_bdevs": 3, 00:11:48.731 "num_base_bdevs_discovered": 3, 00:11:48.731 "num_base_bdevs_operational": 3, 00:11:48.731 "base_bdevs_list": [ 00:11:48.731 { 00:11:48.731 "name": "pt1", 00:11:48.731 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:48.731 "is_configured": true, 00:11:48.731 "data_offset": 2048, 00:11:48.731 "data_size": 63488 00:11:48.731 }, 00:11:48.731 { 00:11:48.731 "name": "pt2", 00:11:48.731 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:48.731 "is_configured": true, 00:11:48.731 "data_offset": 2048, 00:11:48.731 "data_size": 63488 00:11:48.731 }, 00:11:48.731 { 00:11:48.731 "name": "pt3", 00:11:48.731 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:48.731 "is_configured": true, 00:11:48.731 "data_offset": 2048, 00:11:48.731 "data_size": 63488 00:11:48.731 } 00:11:48.731 ] 00:11:48.731 }' 00:11:48.731 15:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.731 15:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.990 15:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:48.990 15:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:48.990 15:38:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:48.990 15:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:48.990 15:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:48.990 15:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:48.990 15:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:48.990 15:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.990 15:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:48.990 15:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.990 [2024-12-06 15:38:32.186624] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:48.991 15:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.991 15:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:48.991 "name": "raid_bdev1", 00:11:48.991 "aliases": [ 00:11:48.991 "2d8073cc-54b5-4dfc-818a-7321fb313713" 00:11:48.991 ], 00:11:48.991 "product_name": "Raid Volume", 00:11:48.991 "block_size": 512, 00:11:48.991 "num_blocks": 190464, 00:11:48.991 "uuid": "2d8073cc-54b5-4dfc-818a-7321fb313713", 00:11:48.991 "assigned_rate_limits": { 00:11:48.991 "rw_ios_per_sec": 0, 00:11:48.991 "rw_mbytes_per_sec": 0, 00:11:48.991 "r_mbytes_per_sec": 0, 00:11:48.991 "w_mbytes_per_sec": 0 00:11:48.991 }, 00:11:48.991 "claimed": false, 00:11:48.991 "zoned": false, 00:11:48.991 "supported_io_types": { 00:11:48.991 "read": true, 00:11:48.991 "write": true, 00:11:48.991 "unmap": true, 00:11:48.991 "flush": true, 00:11:48.991 "reset": true, 00:11:48.991 "nvme_admin": false, 00:11:48.991 "nvme_io": false, 00:11:48.991 "nvme_io_md": false, 00:11:48.991 
"write_zeroes": true, 00:11:48.991 "zcopy": false, 00:11:48.991 "get_zone_info": false, 00:11:48.991 "zone_management": false, 00:11:48.991 "zone_append": false, 00:11:48.991 "compare": false, 00:11:48.991 "compare_and_write": false, 00:11:48.991 "abort": false, 00:11:48.991 "seek_hole": false, 00:11:48.991 "seek_data": false, 00:11:48.991 "copy": false, 00:11:48.991 "nvme_iov_md": false 00:11:48.991 }, 00:11:48.991 "memory_domains": [ 00:11:48.991 { 00:11:48.991 "dma_device_id": "system", 00:11:48.991 "dma_device_type": 1 00:11:48.991 }, 00:11:48.991 { 00:11:48.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.991 "dma_device_type": 2 00:11:48.991 }, 00:11:48.991 { 00:11:48.991 "dma_device_id": "system", 00:11:48.991 "dma_device_type": 1 00:11:48.991 }, 00:11:48.991 { 00:11:48.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.991 "dma_device_type": 2 00:11:48.991 }, 00:11:48.991 { 00:11:48.991 "dma_device_id": "system", 00:11:48.991 "dma_device_type": 1 00:11:48.991 }, 00:11:48.991 { 00:11:48.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.991 "dma_device_type": 2 00:11:48.991 } 00:11:48.991 ], 00:11:48.991 "driver_specific": { 00:11:48.991 "raid": { 00:11:48.991 "uuid": "2d8073cc-54b5-4dfc-818a-7321fb313713", 00:11:48.991 "strip_size_kb": 64, 00:11:48.991 "state": "online", 00:11:48.991 "raid_level": "raid0", 00:11:48.991 "superblock": true, 00:11:48.991 "num_base_bdevs": 3, 00:11:48.991 "num_base_bdevs_discovered": 3, 00:11:48.991 "num_base_bdevs_operational": 3, 00:11:48.991 "base_bdevs_list": [ 00:11:48.991 { 00:11:48.991 "name": "pt1", 00:11:48.991 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:48.991 "is_configured": true, 00:11:48.991 "data_offset": 2048, 00:11:48.991 "data_size": 63488 00:11:48.991 }, 00:11:48.991 { 00:11:48.991 "name": "pt2", 00:11:48.991 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:48.991 "is_configured": true, 00:11:48.991 "data_offset": 2048, 00:11:48.991 "data_size": 63488 00:11:48.991 }, 00:11:48.991 
{ 00:11:48.991 "name": "pt3", 00:11:48.991 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:48.991 "is_configured": true, 00:11:48.991 "data_offset": 2048, 00:11:48.991 "data_size": 63488 00:11:48.991 } 00:11:48.991 ] 00:11:48.991 } 00:11:48.991 } 00:11:48.991 }' 00:11:48.991 15:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:48.991 15:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:48.991 pt2 00:11:48.991 pt3' 00:11:48.991 15:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:49.252 15:38:32 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.252 
[2024-12-06 15:38:32.458427] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2d8073cc-54b5-4dfc-818a-7321fb313713 '!=' 2d8073cc-54b5-4dfc-818a-7321fb313713 ']' 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65093 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65093 ']' 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65093 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65093 00:11:49.252 killing process with pid 65093 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65093' 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65093 00:11:49.252 [2024-12-06 15:38:32.537263] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:49.252 [2024-12-06 15:38:32.537389] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:49.252 15:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 65093 00:11:49.252 [2024-12-06 15:38:32.537461] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:49.252 [2024-12-06 15:38:32.537476] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:49.821 [2024-12-06 15:38:32.872480] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:51.198 ************************************ 00:11:51.198 END TEST raid_superblock_test 00:11:51.198 ************************************ 00:11:51.198 15:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:51.198 00:11:51.198 real 0m5.392s 00:11:51.198 user 0m7.524s 00:11:51.198 sys 0m1.132s 00:11:51.198 15:38:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:51.198 15:38:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.198 15:38:34 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:11:51.198 15:38:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:51.198 15:38:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:51.198 15:38:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:51.198 ************************************ 00:11:51.198 START TEST raid_read_error_test 00:11:51.198 ************************************ 00:11:51.198 15:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:11:51.198 15:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:51.198 15:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:51.198 15:38:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:51.198 15:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:51.198 15:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:51.198 15:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:51.198 15:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:51.198 15:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:51.198 15:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:51.198 15:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:51.198 15:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:51.198 15:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:51.198 15:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:51.198 15:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:51.198 15:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:51.198 15:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:51.198 15:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:51.198 15:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:51.198 15:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:51.198 15:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:51.198 15:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:51.199 15:38:34 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:51.199 15:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:51.199 15:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:51.199 15:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:51.199 15:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.N1aYiT5FdV 00:11:51.199 15:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65346 00:11:51.199 15:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:51.199 15:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65346 00:11:51.199 15:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65346 ']' 00:11:51.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.199 15:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.199 15:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:51.199 15:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.199 15:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:51.199 15:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.199 [2024-12-06 15:38:34.320458] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:11:51.199 [2024-12-06 15:38:34.321313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65346 ] 00:11:51.457 [2024-12-06 15:38:34.523651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.457 [2024-12-06 15:38:34.669466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.716 [2024-12-06 15:38:34.917975] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.716 [2024-12-06 15:38:34.918053] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.974 15:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:51.974 15:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:51.974 15:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:51.975 15:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:51.975 15:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.975 15:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.975 BaseBdev1_malloc 00:11:51.975 15:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.975 15:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:51.975 15:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.975 15:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.234 true 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.234 [2024-12-06 15:38:35.280870] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:52.234 [2024-12-06 15:38:35.280942] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.234 [2024-12-06 15:38:35.280984] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:52.234 [2024-12-06 15:38:35.281000] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.234 [2024-12-06 15:38:35.283713] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.234 [2024-12-06 15:38:35.283758] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:52.234 BaseBdev1 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.234 BaseBdev2_malloc 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.234 true 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.234 [2024-12-06 15:38:35.356527] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:52.234 [2024-12-06 15:38:35.356728] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.234 [2024-12-06 15:38:35.356759] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:52.234 [2024-12-06 15:38:35.356775] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.234 [2024-12-06 15:38:35.359570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.234 [2024-12-06 15:38:35.359613] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:52.234 BaseBdev2 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.234 BaseBdev3_malloc 00:11:52.234 15:38:35 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.234 true 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.234 [2024-12-06 15:38:35.443147] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:52.234 [2024-12-06 15:38:35.443206] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.234 [2024-12-06 15:38:35.443227] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:52.234 [2024-12-06 15:38:35.443242] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.234 [2024-12-06 15:38:35.445910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.234 [2024-12-06 15:38:35.445952] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:52.234 BaseBdev3 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.234 [2024-12-06 15:38:35.455228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:52.234 [2024-12-06 15:38:35.457590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:52.234 [2024-12-06 15:38:35.457665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:52.234 [2024-12-06 15:38:35.457866] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:52.234 [2024-12-06 15:38:35.457883] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:52.234 [2024-12-06 15:38:35.458166] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:52.234 [2024-12-06 15:38:35.458341] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:52.234 [2024-12-06 15:38:35.458358] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:52.234 [2024-12-06 15:38:35.458521] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.234 15:38:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.234 15:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.234 "name": "raid_bdev1", 00:11:52.234 "uuid": "beec08d8-5539-4f16-8f4c-f63725b65192", 00:11:52.234 "strip_size_kb": 64, 00:11:52.234 "state": "online", 00:11:52.234 "raid_level": "raid0", 00:11:52.234 "superblock": true, 00:11:52.234 "num_base_bdevs": 3, 00:11:52.234 "num_base_bdevs_discovered": 3, 00:11:52.234 "num_base_bdevs_operational": 3, 00:11:52.234 "base_bdevs_list": [ 00:11:52.234 { 00:11:52.234 "name": "BaseBdev1", 00:11:52.234 "uuid": "6adbf923-611b-5dcb-8bb2-393e831921d0", 00:11:52.234 "is_configured": true, 00:11:52.234 "data_offset": 2048, 00:11:52.234 "data_size": 63488 00:11:52.234 }, 00:11:52.234 { 00:11:52.234 "name": "BaseBdev2", 00:11:52.234 "uuid": "466c184d-453d-5c25-a0e9-c0df9ef28d3d", 00:11:52.234 "is_configured": true, 00:11:52.234 "data_offset": 2048, 00:11:52.234 "data_size": 63488 
00:11:52.234 }, 00:11:52.234 { 00:11:52.234 "name": "BaseBdev3", 00:11:52.234 "uuid": "9a44e1d2-5c66-5f6b-8a83-4b32e1d98685", 00:11:52.235 "is_configured": true, 00:11:52.235 "data_offset": 2048, 00:11:52.235 "data_size": 63488 00:11:52.235 } 00:11:52.235 ] 00:11:52.235 }' 00:11:52.235 15:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.235 15:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.819 15:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:52.819 15:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:52.819 [2024-12-06 15:38:35.992096] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:53.755 15:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:53.755 15:38:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.755 15:38:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.755 15:38:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.755 15:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:53.755 15:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:53.755 15:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:53.755 15:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:53.755 15:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:53.755 15:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:11:53.755 15:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:53.755 15:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:53.755 15:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:53.755 15:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.755 15:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.755 15:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.755 15:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.755 15:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.755 15:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.755 15:38:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.755 15:38:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.755 15:38:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.755 15:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.755 "name": "raid_bdev1", 00:11:53.755 "uuid": "beec08d8-5539-4f16-8f4c-f63725b65192", 00:11:53.755 "strip_size_kb": 64, 00:11:53.755 "state": "online", 00:11:53.755 "raid_level": "raid0", 00:11:53.755 "superblock": true, 00:11:53.755 "num_base_bdevs": 3, 00:11:53.755 "num_base_bdevs_discovered": 3, 00:11:53.755 "num_base_bdevs_operational": 3, 00:11:53.755 "base_bdevs_list": [ 00:11:53.755 { 00:11:53.755 "name": "BaseBdev1", 00:11:53.755 "uuid": "6adbf923-611b-5dcb-8bb2-393e831921d0", 00:11:53.755 "is_configured": true, 00:11:53.755 "data_offset": 2048, 00:11:53.755 "data_size": 63488 
00:11:53.755 }, 00:11:53.755 { 00:11:53.755 "name": "BaseBdev2", 00:11:53.755 "uuid": "466c184d-453d-5c25-a0e9-c0df9ef28d3d", 00:11:53.755 "is_configured": true, 00:11:53.755 "data_offset": 2048, 00:11:53.755 "data_size": 63488 00:11:53.755 }, 00:11:53.755 { 00:11:53.755 "name": "BaseBdev3", 00:11:53.755 "uuid": "9a44e1d2-5c66-5f6b-8a83-4b32e1d98685", 00:11:53.755 "is_configured": true, 00:11:53.755 "data_offset": 2048, 00:11:53.755 "data_size": 63488 00:11:53.755 } 00:11:53.755 ] 00:11:53.755 }' 00:11:53.755 15:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.755 15:38:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.323 15:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:54.323 15:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.323 15:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.323 [2024-12-06 15:38:37.325866] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:54.323 [2024-12-06 15:38:37.326053] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:54.323 [2024-12-06 15:38:37.329233] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:54.323 [2024-12-06 15:38:37.329385] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:54.323 [2024-12-06 15:38:37.329475] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:54.323 [2024-12-06 15:38:37.329597] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:54.323 { 00:11:54.323 "results": [ 00:11:54.323 { 00:11:54.323 "job": "raid_bdev1", 00:11:54.323 "core_mask": "0x1", 00:11:54.323 "workload": "randrw", 00:11:54.323 "percentage": 50, 
00:11:54.323 "status": "finished", 00:11:54.323 "queue_depth": 1, 00:11:54.323 "io_size": 131072, 00:11:54.323 "runtime": 1.333598, 00:11:54.323 "iops": 13090.151604906427, 00:11:54.323 "mibps": 1636.2689506133033, 00:11:54.323 "io_failed": 1, 00:11:54.323 "io_timeout": 0, 00:11:54.323 "avg_latency_us": 106.99052679960305, 00:11:54.323 "min_latency_us": 24.469076305220884, 00:11:54.323 "max_latency_us": 1552.8610441767069 00:11:54.323 } 00:11:54.323 ], 00:11:54.323 "core_count": 1 00:11:54.323 } 00:11:54.323 15:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.323 15:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65346 00:11:54.323 15:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65346 ']' 00:11:54.323 15:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65346 00:11:54.323 15:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:54.323 15:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:54.323 15:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65346 00:11:54.323 15:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:54.323 killing process with pid 65346 00:11:54.323 15:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:54.323 15:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65346' 00:11:54.323 15:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65346 00:11:54.323 [2024-12-06 15:38:37.372789] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:54.323 15:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65346 00:11:54.583 [2024-12-06 
15:38:37.629135] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:55.961 15:38:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.N1aYiT5FdV 00:11:55.961 15:38:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:55.961 15:38:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:55.961 15:38:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:11:55.961 15:38:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:55.961 15:38:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:55.961 15:38:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:55.961 15:38:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:11:55.961 00:11:55.961 real 0m4.761s 00:11:55.961 user 0m5.471s 00:11:55.961 sys 0m0.760s 00:11:55.961 15:38:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:55.961 15:38:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.961 ************************************ 00:11:55.961 END TEST raid_read_error_test 00:11:55.961 ************************************ 00:11:55.961 15:38:39 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:11:55.961 15:38:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:55.962 15:38:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:55.962 15:38:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:55.962 ************************************ 00:11:55.962 START TEST raid_write_error_test 00:11:55.962 ************************************ 00:11:55.962 15:38:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:11:55.962 15:38:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:55.962 15:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:55.962 15:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:55.962 15:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:55.962 15:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:55.962 15:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:55.962 15:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:55.962 15:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:55.962 15:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:55.962 15:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:55.962 15:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:55.962 15:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:55.962 15:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:55.962 15:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:55.962 15:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:55.962 15:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:55.962 15:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:55.962 15:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:55.962 15:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:55.962 15:38:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:55.962 15:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:55.962 15:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:55.962 15:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:55.962 15:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:55.962 15:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:55.962 15:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.p31WlC3tgB 00:11:55.962 15:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65497 00:11:55.962 15:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:55.962 15:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65497 00:11:55.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.962 15:38:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65497 ']' 00:11:55.962 15:38:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.962 15:38:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:55.962 15:38:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:55.962 15:38:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:55.962 15:38:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.962 [2024-12-06 15:38:39.154347] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:11:55.962 [2024-12-06 15:38:39.154498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65497 ] 00:11:56.221 [2024-12-06 15:38:39.341837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.221 [2024-12-06 15:38:39.480115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.481 [2024-12-06 15:38:39.712239] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:56.481 [2024-12-06 15:38:39.712308] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:56.740 15:38:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:56.740 15:38:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:56.740 15:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:56.740 15:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:56.740 15:38:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.740 15:38:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.999 BaseBdev1_malloc 00:11:56.999 15:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.999 15:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:11:56.999 15:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.999 15:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.999 true 00:11:56.999 15:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.999 15:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:56.999 15:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.999 15:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.999 [2024-12-06 15:38:40.058565] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:56.999 [2024-12-06 15:38:40.058758] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.999 [2024-12-06 15:38:40.058791] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:56.999 [2024-12-06 15:38:40.058807] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.999 [2024-12-06 15:38:40.061605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.999 [2024-12-06 15:38:40.061649] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:56.999 BaseBdev1 00:11:56.999 15:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.999 15:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:56.999 15:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:56.999 15:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.999 15:38:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:56.999 BaseBdev2_malloc 00:11:56.999 15:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.999 15:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:56.999 15:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.999 15:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.999 true 00:11:56.999 15:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.999 15:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:56.999 15:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.999 15:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.999 [2024-12-06 15:38:40.133720] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:56.999 [2024-12-06 15:38:40.133783] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.999 [2024-12-06 15:38:40.133803] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:56.999 [2024-12-06 15:38:40.133818] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.999 [2024-12-06 15:38:40.136488] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.999 [2024-12-06 15:38:40.136545] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:56.999 BaseBdev2 00:11:56.999 15:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.999 15:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:56.999 15:38:40 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:56.999 15:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.000 15:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.000 BaseBdev3_malloc 00:11:57.000 15:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.000 15:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:57.000 15:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.000 15:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.000 true 00:11:57.000 15:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.000 15:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:57.000 15:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.000 15:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.000 [2024-12-06 15:38:40.218278] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:57.000 [2024-12-06 15:38:40.218333] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.000 [2024-12-06 15:38:40.218354] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:57.000 [2024-12-06 15:38:40.218369] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.000 [2024-12-06 15:38:40.221035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.000 [2024-12-06 15:38:40.221095] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:11:57.000 BaseBdev3 00:11:57.000 15:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.000 15:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:57.000 15:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.000 15:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.000 [2024-12-06 15:38:40.230360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:57.000 [2024-12-06 15:38:40.232735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:57.000 [2024-12-06 15:38:40.232811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:57.000 [2024-12-06 15:38:40.233025] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:57.000 [2024-12-06 15:38:40.233042] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:57.000 [2024-12-06 15:38:40.233308] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:57.000 [2024-12-06 15:38:40.233482] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:57.000 [2024-12-06 15:38:40.233516] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:57.000 [2024-12-06 15:38:40.233665] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:57.000 15:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.000 15:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:57.000 15:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:11:57.000 15:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.000 15:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:57.000 15:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.000 15:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:57.000 15:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.000 15:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.000 15:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.000 15:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.000 15:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.000 15:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.000 15:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.000 15:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.000 15:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.000 15:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.000 "name": "raid_bdev1", 00:11:57.000 "uuid": "831af9ee-c1e9-4ea6-b0ba-ec85f08d8252", 00:11:57.000 "strip_size_kb": 64, 00:11:57.000 "state": "online", 00:11:57.000 "raid_level": "raid0", 00:11:57.000 "superblock": true, 00:11:57.000 "num_base_bdevs": 3, 00:11:57.000 "num_base_bdevs_discovered": 3, 00:11:57.000 "num_base_bdevs_operational": 3, 00:11:57.000 "base_bdevs_list": [ 00:11:57.000 { 00:11:57.000 "name": "BaseBdev1", 
00:11:57.000 "uuid": "570d5aa3-89e2-555a-98ac-c740ad41634f", 00:11:57.000 "is_configured": true, 00:11:57.000 "data_offset": 2048, 00:11:57.000 "data_size": 63488 00:11:57.000 }, 00:11:57.000 { 00:11:57.000 "name": "BaseBdev2", 00:11:57.000 "uuid": "30169b6f-dae7-529d-9156-f74ff63f2844", 00:11:57.000 "is_configured": true, 00:11:57.000 "data_offset": 2048, 00:11:57.000 "data_size": 63488 00:11:57.000 }, 00:11:57.000 { 00:11:57.000 "name": "BaseBdev3", 00:11:57.000 "uuid": "b9e9d654-18e2-52b9-88b6-659184c586b4", 00:11:57.000 "is_configured": true, 00:11:57.000 "data_offset": 2048, 00:11:57.000 "data_size": 63488 00:11:57.000 } 00:11:57.000 ] 00:11:57.000 }' 00:11:57.000 15:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.000 15:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.568 15:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:57.568 15:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:57.568 [2024-12-06 15:38:40.723351] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:58.506 15:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:58.506 15:38:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.506 15:38:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.506 15:38:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.506 15:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:58.506 15:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:58.506 15:38:41 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:58.506 15:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:58.506 15:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:58.506 15:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:58.506 15:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:58.506 15:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.506 15:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:58.506 15:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.506 15:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.506 15:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.506 15:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.506 15:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.506 15:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.506 15:38:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.506 15:38:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.506 15:38:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.506 15:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.506 "name": "raid_bdev1", 00:11:58.506 "uuid": "831af9ee-c1e9-4ea6-b0ba-ec85f08d8252", 00:11:58.506 "strip_size_kb": 64, 00:11:58.506 "state": "online", 00:11:58.506 
"raid_level": "raid0", 00:11:58.506 "superblock": true, 00:11:58.506 "num_base_bdevs": 3, 00:11:58.506 "num_base_bdevs_discovered": 3, 00:11:58.506 "num_base_bdevs_operational": 3, 00:11:58.506 "base_bdevs_list": [ 00:11:58.506 { 00:11:58.506 "name": "BaseBdev1", 00:11:58.506 "uuid": "570d5aa3-89e2-555a-98ac-c740ad41634f", 00:11:58.506 "is_configured": true, 00:11:58.506 "data_offset": 2048, 00:11:58.506 "data_size": 63488 00:11:58.506 }, 00:11:58.506 { 00:11:58.506 "name": "BaseBdev2", 00:11:58.506 "uuid": "30169b6f-dae7-529d-9156-f74ff63f2844", 00:11:58.506 "is_configured": true, 00:11:58.506 "data_offset": 2048, 00:11:58.506 "data_size": 63488 00:11:58.506 }, 00:11:58.506 { 00:11:58.506 "name": "BaseBdev3", 00:11:58.506 "uuid": "b9e9d654-18e2-52b9-88b6-659184c586b4", 00:11:58.506 "is_configured": true, 00:11:58.506 "data_offset": 2048, 00:11:58.506 "data_size": 63488 00:11:58.506 } 00:11:58.506 ] 00:11:58.506 }' 00:11:58.506 15:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.506 15:38:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.766 15:38:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:58.766 15:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.766 15:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.766 [2024-12-06 15:38:42.052494] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:58.766 [2024-12-06 15:38:42.052544] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:58.766 [2024-12-06 15:38:42.055307] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:58.766 [2024-12-06 15:38:42.055531] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.766 [2024-12-06 15:38:42.055601] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:58.766 [2024-12-06 15:38:42.055614] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:58.766 { 00:11:58.766 "results": [ 00:11:58.766 { 00:11:58.766 "job": "raid_bdev1", 00:11:58.766 "core_mask": "0x1", 00:11:58.766 "workload": "randrw", 00:11:58.766 "percentage": 50, 00:11:58.766 "status": "finished", 00:11:58.766 "queue_depth": 1, 00:11:58.766 "io_size": 131072, 00:11:58.766 "runtime": 1.328992, 00:11:58.766 "iops": 14006.856324191567, 00:11:58.766 "mibps": 1750.857040523946, 00:11:58.766 "io_failed": 1, 00:11:58.766 "io_timeout": 0, 00:11:58.766 "avg_latency_us": 99.87670907092055, 00:11:58.766 "min_latency_us": 23.646586345381525, 00:11:58.766 "max_latency_us": 1434.4224899598394 00:11:58.766 } 00:11:58.766 ], 00:11:58.766 "core_count": 1 00:11:58.766 } 00:11:58.766 15:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.766 15:38:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65497 00:11:58.766 15:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65497 ']' 00:11:58.766 15:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65497 00:11:59.025 15:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:59.025 15:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:59.025 15:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65497 00:11:59.025 killing process with pid 65497 00:11:59.025 15:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:59.025 15:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:59.025 15:38:42 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65497' 00:11:59.025 15:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65497 00:11:59.025 [2024-12-06 15:38:42.097067] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:59.025 15:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65497 00:11:59.284 [2024-12-06 15:38:42.352659] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:00.667 15:38:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.p31WlC3tgB 00:12:00.667 15:38:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:00.667 15:38:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:00.667 ************************************ 00:12:00.667 END TEST raid_write_error_test 00:12:00.667 ************************************ 00:12:00.667 15:38:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:12:00.667 15:38:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:00.667 15:38:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:00.667 15:38:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:00.667 15:38:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:12:00.667 00:12:00.667 real 0m4.654s 00:12:00.667 user 0m5.266s 00:12:00.667 sys 0m0.745s 00:12:00.667 15:38:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.667 15:38:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.667 15:38:43 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:00.667 15:38:43 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:12:00.667 15:38:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:00.667 15:38:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:00.667 15:38:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:00.667 ************************************ 00:12:00.667 START TEST raid_state_function_test 00:12:00.667 ************************************ 00:12:00.668 15:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:12:00.668 15:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:00.668 15:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:00.668 15:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:00.668 15:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:00.668 15:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:00.668 15:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:00.668 15:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:00.668 15:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:00.668 15:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:00.668 15:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:00.668 15:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:00.668 15:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:00.668 15:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:00.668 15:38:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:00.668 15:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:00.668 15:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:00.668 15:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:00.668 15:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:00.668 15:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:00.668 15:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:00.668 15:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:00.668 15:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:00.668 15:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:00.668 15:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:00.668 15:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:00.668 15:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:00.668 15:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:00.668 15:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65636 00:12:00.668 15:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65636' 00:12:00.668 Process raid pid: 65636 00:12:00.668 15:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65636 00:12:00.668 15:38:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65636 ']' 00:12:00.668 15:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.668 15:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:00.668 15:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.668 15:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:00.668 15:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.668 [2024-12-06 15:38:43.881262] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:12:00.668 [2024-12-06 15:38:43.881422] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.927 [2024-12-06 15:38:44.070125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.927 [2024-12-06 15:38:44.217950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.186 [2024-12-06 15:38:44.468024] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:01.186 [2024-12-06 15:38:44.468264] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:01.453 15:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:01.453 15:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:01.453 15:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:01.453 15:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.453 15:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.453 [2024-12-06 15:38:44.727858] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:01.453 [2024-12-06 15:38:44.727935] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:01.453 [2024-12-06 15:38:44.727948] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:01.453 [2024-12-06 15:38:44.727963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:01.453 [2024-12-06 15:38:44.727970] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:01.453 [2024-12-06 15:38:44.727983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:01.453 15:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.453 15:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:01.453 15:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.453 15:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.453 15:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:01.453 15:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:01.453 15:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:01.453 15:38:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.453 15:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.453 15:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.453 15:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.453 15:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.453 15:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.453 15:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.453 15:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.809 15:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.809 15:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.809 "name": "Existed_Raid", 00:12:01.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.809 "strip_size_kb": 64, 00:12:01.809 "state": "configuring", 00:12:01.809 "raid_level": "concat", 00:12:01.809 "superblock": false, 00:12:01.809 "num_base_bdevs": 3, 00:12:01.809 "num_base_bdevs_discovered": 0, 00:12:01.809 "num_base_bdevs_operational": 3, 00:12:01.809 "base_bdevs_list": [ 00:12:01.809 { 00:12:01.809 "name": "BaseBdev1", 00:12:01.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.809 "is_configured": false, 00:12:01.809 "data_offset": 0, 00:12:01.809 "data_size": 0 00:12:01.809 }, 00:12:01.809 { 00:12:01.809 "name": "BaseBdev2", 00:12:01.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.809 "is_configured": false, 00:12:01.809 "data_offset": 0, 00:12:01.809 "data_size": 0 00:12:01.809 }, 00:12:01.809 { 00:12:01.809 "name": "BaseBdev3", 00:12:01.809 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:01.809 "is_configured": false, 00:12:01.809 "data_offset": 0, 00:12:01.809 "data_size": 0 00:12:01.809 } 00:12:01.809 ] 00:12:01.809 }' 00:12:01.809 15:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.809 15:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.067 [2024-12-06 15:38:45.167446] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:02.067 [2024-12-06 15:38:45.167497] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.067 [2024-12-06 15:38:45.179440] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:02.067 [2024-12-06 15:38:45.179528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:02.067 [2024-12-06 15:38:45.179541] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:02.067 [2024-12-06 15:38:45.179555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:12:02.067 [2024-12-06 15:38:45.179562] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:02.067 [2024-12-06 15:38:45.179575] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.067 [2024-12-06 15:38:45.236073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:02.067 BaseBdev1 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.067 [ 00:12:02.067 { 00:12:02.067 "name": "BaseBdev1", 00:12:02.067 "aliases": [ 00:12:02.067 "bf97933f-921c-419b-ac2f-9c7e2a171e8e" 00:12:02.067 ], 00:12:02.067 "product_name": "Malloc disk", 00:12:02.067 "block_size": 512, 00:12:02.067 "num_blocks": 65536, 00:12:02.067 "uuid": "bf97933f-921c-419b-ac2f-9c7e2a171e8e", 00:12:02.067 "assigned_rate_limits": { 00:12:02.067 "rw_ios_per_sec": 0, 00:12:02.067 "rw_mbytes_per_sec": 0, 00:12:02.067 "r_mbytes_per_sec": 0, 00:12:02.067 "w_mbytes_per_sec": 0 00:12:02.067 }, 00:12:02.067 "claimed": true, 00:12:02.067 "claim_type": "exclusive_write", 00:12:02.067 "zoned": false, 00:12:02.067 "supported_io_types": { 00:12:02.067 "read": true, 00:12:02.067 "write": true, 00:12:02.067 "unmap": true, 00:12:02.067 "flush": true, 00:12:02.067 "reset": true, 00:12:02.067 "nvme_admin": false, 00:12:02.067 "nvme_io": false, 00:12:02.067 "nvme_io_md": false, 00:12:02.067 "write_zeroes": true, 00:12:02.067 "zcopy": true, 00:12:02.067 "get_zone_info": false, 00:12:02.067 "zone_management": false, 00:12:02.067 "zone_append": false, 00:12:02.067 "compare": false, 00:12:02.067 "compare_and_write": false, 00:12:02.067 "abort": true, 00:12:02.067 "seek_hole": false, 00:12:02.067 "seek_data": false, 00:12:02.067 "copy": true, 00:12:02.067 "nvme_iov_md": false 00:12:02.067 }, 00:12:02.067 "memory_domains": [ 00:12:02.067 { 00:12:02.067 "dma_device_id": "system", 00:12:02.067 "dma_device_type": 1 00:12:02.067 }, 00:12:02.067 { 00:12:02.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:12:02.067 "dma_device_type": 2 00:12:02.067 } 00:12:02.067 ], 00:12:02.067 "driver_specific": {} 00:12:02.067 } 00:12:02.067 ] 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.067 15:38:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.067 "name": "Existed_Raid", 00:12:02.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.067 "strip_size_kb": 64, 00:12:02.067 "state": "configuring", 00:12:02.067 "raid_level": "concat", 00:12:02.067 "superblock": false, 00:12:02.067 "num_base_bdevs": 3, 00:12:02.067 "num_base_bdevs_discovered": 1, 00:12:02.067 "num_base_bdevs_operational": 3, 00:12:02.067 "base_bdevs_list": [ 00:12:02.067 { 00:12:02.067 "name": "BaseBdev1", 00:12:02.067 "uuid": "bf97933f-921c-419b-ac2f-9c7e2a171e8e", 00:12:02.067 "is_configured": true, 00:12:02.067 "data_offset": 0, 00:12:02.067 "data_size": 65536 00:12:02.067 }, 00:12:02.067 { 00:12:02.067 "name": "BaseBdev2", 00:12:02.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.067 "is_configured": false, 00:12:02.067 "data_offset": 0, 00:12:02.067 "data_size": 0 00:12:02.067 }, 00:12:02.067 { 00:12:02.067 "name": "BaseBdev3", 00:12:02.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.067 "is_configured": false, 00:12:02.067 "data_offset": 0, 00:12:02.067 "data_size": 0 00:12:02.067 } 00:12:02.067 ] 00:12:02.067 }' 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.067 15:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.633 15:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:02.633 15:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.633 15:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.633 [2024-12-06 15:38:45.723687] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:02.633 [2024-12-06 15:38:45.723903] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:02.633 15:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.633 15:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:02.633 15:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.633 15:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.633 [2024-12-06 15:38:45.735759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:02.633 [2024-12-06 15:38:45.738200] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:02.633 [2024-12-06 15:38:45.738255] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:02.633 [2024-12-06 15:38:45.738269] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:02.633 [2024-12-06 15:38:45.738282] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:02.633 15:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.633 15:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:02.633 15:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:02.633 15:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:02.633 15:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.633 15:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.633 15:38:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:02.633 15:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:02.633 15:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:02.633 15:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.634 15:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.634 15:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.634 15:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.634 15:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.634 15:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.634 15:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.634 15:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.634 15:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.634 15:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.634 "name": "Existed_Raid", 00:12:02.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.634 "strip_size_kb": 64, 00:12:02.634 "state": "configuring", 00:12:02.634 "raid_level": "concat", 00:12:02.634 "superblock": false, 00:12:02.634 "num_base_bdevs": 3, 00:12:02.634 "num_base_bdevs_discovered": 1, 00:12:02.634 "num_base_bdevs_operational": 3, 00:12:02.634 "base_bdevs_list": [ 00:12:02.634 { 00:12:02.634 "name": "BaseBdev1", 00:12:02.634 "uuid": "bf97933f-921c-419b-ac2f-9c7e2a171e8e", 00:12:02.634 "is_configured": true, 00:12:02.634 "data_offset": 
0, 00:12:02.634 "data_size": 65536 00:12:02.634 }, 00:12:02.634 { 00:12:02.634 "name": "BaseBdev2", 00:12:02.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.634 "is_configured": false, 00:12:02.634 "data_offset": 0, 00:12:02.634 "data_size": 0 00:12:02.634 }, 00:12:02.634 { 00:12:02.634 "name": "BaseBdev3", 00:12:02.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.634 "is_configured": false, 00:12:02.634 "data_offset": 0, 00:12:02.634 "data_size": 0 00:12:02.634 } 00:12:02.634 ] 00:12:02.634 }' 00:12:02.634 15:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.634 15:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.892 15:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:02.892 15:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.892 15:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.892 [2024-12-06 15:38:46.166142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:02.892 BaseBdev2 00:12:02.892 15:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.892 15:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:02.892 15:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:02.892 15:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:02.892 15:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:02.893 15:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:02.893 15:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:12:02.893 15:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:02.893 15:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.893 15:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.893 15:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.893 15:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:02.893 15:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.893 15:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.152 [ 00:12:03.152 { 00:12:03.152 "name": "BaseBdev2", 00:12:03.152 "aliases": [ 00:12:03.152 "2fb9b1a5-32fc-461c-8d6d-e1fff2ed09a7" 00:12:03.152 ], 00:12:03.152 "product_name": "Malloc disk", 00:12:03.152 "block_size": 512, 00:12:03.152 "num_blocks": 65536, 00:12:03.152 "uuid": "2fb9b1a5-32fc-461c-8d6d-e1fff2ed09a7", 00:12:03.152 "assigned_rate_limits": { 00:12:03.152 "rw_ios_per_sec": 0, 00:12:03.152 "rw_mbytes_per_sec": 0, 00:12:03.152 "r_mbytes_per_sec": 0, 00:12:03.152 "w_mbytes_per_sec": 0 00:12:03.152 }, 00:12:03.152 "claimed": true, 00:12:03.152 "claim_type": "exclusive_write", 00:12:03.152 "zoned": false, 00:12:03.152 "supported_io_types": { 00:12:03.152 "read": true, 00:12:03.152 "write": true, 00:12:03.152 "unmap": true, 00:12:03.152 "flush": true, 00:12:03.152 "reset": true, 00:12:03.152 "nvme_admin": false, 00:12:03.152 "nvme_io": false, 00:12:03.152 "nvme_io_md": false, 00:12:03.152 "write_zeroes": true, 00:12:03.152 "zcopy": true, 00:12:03.152 "get_zone_info": false, 00:12:03.152 "zone_management": false, 00:12:03.152 "zone_append": false, 00:12:03.152 "compare": false, 00:12:03.152 "compare_and_write": false, 00:12:03.152 "abort": true, 00:12:03.152 "seek_hole": 
false, 00:12:03.152 "seek_data": false, 00:12:03.152 "copy": true, 00:12:03.152 "nvme_iov_md": false 00:12:03.152 }, 00:12:03.152 "memory_domains": [ 00:12:03.152 { 00:12:03.152 "dma_device_id": "system", 00:12:03.152 "dma_device_type": 1 00:12:03.152 }, 00:12:03.152 { 00:12:03.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.152 "dma_device_type": 2 00:12:03.152 } 00:12:03.152 ], 00:12:03.152 "driver_specific": {} 00:12:03.152 } 00:12:03.152 ] 00:12:03.152 15:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.152 15:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:03.152 15:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:03.152 15:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:03.152 15:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:03.152 15:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.152 15:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.152 15:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:03.152 15:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:03.152 15:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:03.152 15:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.152 15:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.152 15:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.152 15:38:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.152 15:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.152 15:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.152 15:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.152 15:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.152 15:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.152 15:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.152 "name": "Existed_Raid", 00:12:03.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.152 "strip_size_kb": 64, 00:12:03.152 "state": "configuring", 00:12:03.152 "raid_level": "concat", 00:12:03.152 "superblock": false, 00:12:03.152 "num_base_bdevs": 3, 00:12:03.152 "num_base_bdevs_discovered": 2, 00:12:03.152 "num_base_bdevs_operational": 3, 00:12:03.152 "base_bdevs_list": [ 00:12:03.152 { 00:12:03.152 "name": "BaseBdev1", 00:12:03.152 "uuid": "bf97933f-921c-419b-ac2f-9c7e2a171e8e", 00:12:03.152 "is_configured": true, 00:12:03.152 "data_offset": 0, 00:12:03.152 "data_size": 65536 00:12:03.152 }, 00:12:03.152 { 00:12:03.152 "name": "BaseBdev2", 00:12:03.152 "uuid": "2fb9b1a5-32fc-461c-8d6d-e1fff2ed09a7", 00:12:03.152 "is_configured": true, 00:12:03.152 "data_offset": 0, 00:12:03.152 "data_size": 65536 00:12:03.152 }, 00:12:03.152 { 00:12:03.152 "name": "BaseBdev3", 00:12:03.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.152 "is_configured": false, 00:12:03.152 "data_offset": 0, 00:12:03.152 "data_size": 0 00:12:03.152 } 00:12:03.152 ] 00:12:03.152 }' 00:12:03.152 15:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.152 15:38:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:03.412 15:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:03.412 15:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.412 15:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.412 [2024-12-06 15:38:46.660485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:03.412 [2024-12-06 15:38:46.660581] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:03.412 [2024-12-06 15:38:46.660599] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:03.412 [2024-12-06 15:38:46.661179] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:03.412 [2024-12-06 15:38:46.661408] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:03.412 [2024-12-06 15:38:46.661420] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:03.412 [2024-12-06 15:38:46.661769] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.412 BaseBdev3 00:12:03.412 15:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.412 15:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:03.412 15:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:03.412 15:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:03.412 15:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:03.412 15:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:03.412 15:38:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:03.412 15:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:03.412 15:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.412 15:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.412 15:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.412 15:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:03.412 15:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.412 15:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.412 [ 00:12:03.412 { 00:12:03.412 "name": "BaseBdev3", 00:12:03.412 "aliases": [ 00:12:03.412 "cf288a20-7d42-4ab5-862b-eaccc663d3ad" 00:12:03.412 ], 00:12:03.412 "product_name": "Malloc disk", 00:12:03.412 "block_size": 512, 00:12:03.412 "num_blocks": 65536, 00:12:03.412 "uuid": "cf288a20-7d42-4ab5-862b-eaccc663d3ad", 00:12:03.412 "assigned_rate_limits": { 00:12:03.412 "rw_ios_per_sec": 0, 00:12:03.412 "rw_mbytes_per_sec": 0, 00:12:03.412 "r_mbytes_per_sec": 0, 00:12:03.412 "w_mbytes_per_sec": 0 00:12:03.412 }, 00:12:03.412 "claimed": true, 00:12:03.412 "claim_type": "exclusive_write", 00:12:03.412 "zoned": false, 00:12:03.412 "supported_io_types": { 00:12:03.412 "read": true, 00:12:03.412 "write": true, 00:12:03.412 "unmap": true, 00:12:03.412 "flush": true, 00:12:03.412 "reset": true, 00:12:03.412 "nvme_admin": false, 00:12:03.412 "nvme_io": false, 00:12:03.412 "nvme_io_md": false, 00:12:03.412 "write_zeroes": true, 00:12:03.412 "zcopy": true, 00:12:03.412 "get_zone_info": false, 00:12:03.412 "zone_management": false, 00:12:03.412 "zone_append": false, 00:12:03.671 "compare": false, 
00:12:03.671 "compare_and_write": false, 00:12:03.671 "abort": true, 00:12:03.671 "seek_hole": false, 00:12:03.671 "seek_data": false, 00:12:03.671 "copy": true, 00:12:03.671 "nvme_iov_md": false 00:12:03.671 }, 00:12:03.671 "memory_domains": [ 00:12:03.671 { 00:12:03.671 "dma_device_id": "system", 00:12:03.671 "dma_device_type": 1 00:12:03.671 }, 00:12:03.671 { 00:12:03.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.671 "dma_device_type": 2 00:12:03.671 } 00:12:03.671 ], 00:12:03.671 "driver_specific": {} 00:12:03.671 } 00:12:03.671 ] 00:12:03.671 15:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.671 15:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:03.671 15:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:03.671 15:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:03.671 15:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:03.671 15:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.671 15:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:03.671 15:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:03.671 15:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:03.671 15:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:03.671 15:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.671 15:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.671 15:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:03.671 15:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.671 15:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.671 15:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.671 15:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.671 15:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.671 15:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.671 15:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.671 "name": "Existed_Raid", 00:12:03.671 "uuid": "6f15e902-ade8-4863-8c2e-d49042300365", 00:12:03.671 "strip_size_kb": 64, 00:12:03.671 "state": "online", 00:12:03.671 "raid_level": "concat", 00:12:03.671 "superblock": false, 00:12:03.671 "num_base_bdevs": 3, 00:12:03.671 "num_base_bdevs_discovered": 3, 00:12:03.671 "num_base_bdevs_operational": 3, 00:12:03.671 "base_bdevs_list": [ 00:12:03.671 { 00:12:03.671 "name": "BaseBdev1", 00:12:03.671 "uuid": "bf97933f-921c-419b-ac2f-9c7e2a171e8e", 00:12:03.671 "is_configured": true, 00:12:03.671 "data_offset": 0, 00:12:03.671 "data_size": 65536 00:12:03.671 }, 00:12:03.671 { 00:12:03.671 "name": "BaseBdev2", 00:12:03.671 "uuid": "2fb9b1a5-32fc-461c-8d6d-e1fff2ed09a7", 00:12:03.671 "is_configured": true, 00:12:03.671 "data_offset": 0, 00:12:03.671 "data_size": 65536 00:12:03.671 }, 00:12:03.671 { 00:12:03.671 "name": "BaseBdev3", 00:12:03.671 "uuid": "cf288a20-7d42-4ab5-862b-eaccc663d3ad", 00:12:03.671 "is_configured": true, 00:12:03.671 "data_offset": 0, 00:12:03.671 "data_size": 65536 00:12:03.671 } 00:12:03.671 ] 00:12:03.671 }' 00:12:03.671 15:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:12:03.671 15:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.929 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:03.929 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:03.929 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:03.929 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:03.929 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:03.929 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:03.929 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:03.929 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:03.929 15:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.929 15:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.929 [2024-12-06 15:38:47.140206] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:03.929 15:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.929 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:03.929 "name": "Existed_Raid", 00:12:03.929 "aliases": [ 00:12:03.929 "6f15e902-ade8-4863-8c2e-d49042300365" 00:12:03.929 ], 00:12:03.929 "product_name": "Raid Volume", 00:12:03.929 "block_size": 512, 00:12:03.929 "num_blocks": 196608, 00:12:03.929 "uuid": "6f15e902-ade8-4863-8c2e-d49042300365", 00:12:03.929 "assigned_rate_limits": { 00:12:03.929 "rw_ios_per_sec": 0, 00:12:03.929 "rw_mbytes_per_sec": 0, 00:12:03.929 "r_mbytes_per_sec": 
0, 00:12:03.929 "w_mbytes_per_sec": 0 00:12:03.929 }, 00:12:03.929 "claimed": false, 00:12:03.929 "zoned": false, 00:12:03.929 "supported_io_types": { 00:12:03.929 "read": true, 00:12:03.929 "write": true, 00:12:03.929 "unmap": true, 00:12:03.929 "flush": true, 00:12:03.929 "reset": true, 00:12:03.929 "nvme_admin": false, 00:12:03.929 "nvme_io": false, 00:12:03.929 "nvme_io_md": false, 00:12:03.929 "write_zeroes": true, 00:12:03.929 "zcopy": false, 00:12:03.929 "get_zone_info": false, 00:12:03.929 "zone_management": false, 00:12:03.929 "zone_append": false, 00:12:03.929 "compare": false, 00:12:03.929 "compare_and_write": false, 00:12:03.929 "abort": false, 00:12:03.929 "seek_hole": false, 00:12:03.929 "seek_data": false, 00:12:03.929 "copy": false, 00:12:03.929 "nvme_iov_md": false 00:12:03.929 }, 00:12:03.929 "memory_domains": [ 00:12:03.929 { 00:12:03.929 "dma_device_id": "system", 00:12:03.929 "dma_device_type": 1 00:12:03.929 }, 00:12:03.929 { 00:12:03.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.929 "dma_device_type": 2 00:12:03.929 }, 00:12:03.930 { 00:12:03.930 "dma_device_id": "system", 00:12:03.930 "dma_device_type": 1 00:12:03.930 }, 00:12:03.930 { 00:12:03.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.930 "dma_device_type": 2 00:12:03.930 }, 00:12:03.930 { 00:12:03.930 "dma_device_id": "system", 00:12:03.930 "dma_device_type": 1 00:12:03.930 }, 00:12:03.930 { 00:12:03.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.930 "dma_device_type": 2 00:12:03.930 } 00:12:03.930 ], 00:12:03.930 "driver_specific": { 00:12:03.930 "raid": { 00:12:03.930 "uuid": "6f15e902-ade8-4863-8c2e-d49042300365", 00:12:03.930 "strip_size_kb": 64, 00:12:03.930 "state": "online", 00:12:03.930 "raid_level": "concat", 00:12:03.930 "superblock": false, 00:12:03.930 "num_base_bdevs": 3, 00:12:03.930 "num_base_bdevs_discovered": 3, 00:12:03.930 "num_base_bdevs_operational": 3, 00:12:03.930 "base_bdevs_list": [ 00:12:03.930 { 00:12:03.930 "name": "BaseBdev1", 
00:12:03.930 "uuid": "bf97933f-921c-419b-ac2f-9c7e2a171e8e", 00:12:03.930 "is_configured": true, 00:12:03.930 "data_offset": 0, 00:12:03.930 "data_size": 65536 00:12:03.930 }, 00:12:03.930 { 00:12:03.930 "name": "BaseBdev2", 00:12:03.930 "uuid": "2fb9b1a5-32fc-461c-8d6d-e1fff2ed09a7", 00:12:03.930 "is_configured": true, 00:12:03.930 "data_offset": 0, 00:12:03.930 "data_size": 65536 00:12:03.930 }, 00:12:03.930 { 00:12:03.930 "name": "BaseBdev3", 00:12:03.930 "uuid": "cf288a20-7d42-4ab5-862b-eaccc663d3ad", 00:12:03.930 "is_configured": true, 00:12:03.930 "data_offset": 0, 00:12:03.930 "data_size": 65536 00:12:03.930 } 00:12:03.930 ] 00:12:03.930 } 00:12:03.930 } 00:12:03.930 }' 00:12:03.930 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:03.930 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:03.930 BaseBdev2 00:12:03.930 BaseBdev3' 00:12:03.930 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.188 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:04.188 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:04.188 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:04.188 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.188 15:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.188 15:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.188 15:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:04.188 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.188 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.188 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:04.188 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:04.188 15:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.189 15:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.189 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.189 15:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.189 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.189 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.189 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:04.189 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:04.189 15:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.189 15:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.189 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.189 15:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.189 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:12:04.189 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.189 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:04.189 15:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.189 15:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.189 [2024-12-06 15:38:47.407705] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:04.189 [2024-12-06 15:38:47.407739] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:04.189 [2024-12-06 15:38:47.407807] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:04.448 15:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.448 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:04.448 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:04.448 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:04.448 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:04.448 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:04.448 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:12:04.448 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.448 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:04.448 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:04.448 15:38:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:04.448 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:04.448 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.448 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.448 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.448 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.448 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.448 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.449 15:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.449 15:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.449 15:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.449 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.449 "name": "Existed_Raid", 00:12:04.449 "uuid": "6f15e902-ade8-4863-8c2e-d49042300365", 00:12:04.449 "strip_size_kb": 64, 00:12:04.449 "state": "offline", 00:12:04.449 "raid_level": "concat", 00:12:04.449 "superblock": false, 00:12:04.449 "num_base_bdevs": 3, 00:12:04.449 "num_base_bdevs_discovered": 2, 00:12:04.449 "num_base_bdevs_operational": 2, 00:12:04.449 "base_bdevs_list": [ 00:12:04.449 { 00:12:04.449 "name": null, 00:12:04.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.449 "is_configured": false, 00:12:04.449 "data_offset": 0, 00:12:04.449 "data_size": 65536 00:12:04.449 }, 00:12:04.449 { 00:12:04.449 "name": "BaseBdev2", 00:12:04.449 "uuid": 
"2fb9b1a5-32fc-461c-8d6d-e1fff2ed09a7", 00:12:04.449 "is_configured": true, 00:12:04.449 "data_offset": 0, 00:12:04.449 "data_size": 65536 00:12:04.449 }, 00:12:04.449 { 00:12:04.449 "name": "BaseBdev3", 00:12:04.449 "uuid": "cf288a20-7d42-4ab5-862b-eaccc663d3ad", 00:12:04.449 "is_configured": true, 00:12:04.449 "data_offset": 0, 00:12:04.449 "data_size": 65536 00:12:04.449 } 00:12:04.449 ] 00:12:04.449 }' 00:12:04.449 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.449 15:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.708 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:04.708 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:04.708 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:04.708 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.708 15:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.708 15:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.708 15:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.708 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:04.708 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:04.708 15:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:04.708 15:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.708 15:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.708 [2024-12-06 15:38:47.952727] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:04.967 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.967 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:04.967 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:04.967 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.967 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:04.967 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.967 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.967 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.967 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:04.967 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:04.967 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:04.967 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.967 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.967 [2024-12-06 15:38:48.112833] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:04.967 [2024-12-06 15:38:48.112898] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:04.967 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.967 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:04.967 15:38:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:04.968 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.968 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.968 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:04.968 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.968 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.227 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:05.227 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:05.227 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:05.227 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:05.227 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:05.227 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:05.227 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.227 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.227 BaseBdev2 00:12:05.227 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.227 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:05.227 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:05.227 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:05.227 
15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:05.227 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:05.227 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:05.227 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:05.227 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.227 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.227 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.227 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:05.227 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.227 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.227 [ 00:12:05.227 { 00:12:05.227 "name": "BaseBdev2", 00:12:05.227 "aliases": [ 00:12:05.227 "24acddff-cd75-47da-a1ea-c3b644ec1f3d" 00:12:05.227 ], 00:12:05.227 "product_name": "Malloc disk", 00:12:05.227 "block_size": 512, 00:12:05.227 "num_blocks": 65536, 00:12:05.227 "uuid": "24acddff-cd75-47da-a1ea-c3b644ec1f3d", 00:12:05.227 "assigned_rate_limits": { 00:12:05.227 "rw_ios_per_sec": 0, 00:12:05.227 "rw_mbytes_per_sec": 0, 00:12:05.227 "r_mbytes_per_sec": 0, 00:12:05.227 "w_mbytes_per_sec": 0 00:12:05.227 }, 00:12:05.227 "claimed": false, 00:12:05.227 "zoned": false, 00:12:05.227 "supported_io_types": { 00:12:05.227 "read": true, 00:12:05.227 "write": true, 00:12:05.227 "unmap": true, 00:12:05.227 "flush": true, 00:12:05.227 "reset": true, 00:12:05.227 "nvme_admin": false, 00:12:05.227 "nvme_io": false, 00:12:05.227 "nvme_io_md": false, 00:12:05.227 "write_zeroes": true, 
00:12:05.227 "zcopy": true, 00:12:05.227 "get_zone_info": false, 00:12:05.227 "zone_management": false, 00:12:05.227 "zone_append": false, 00:12:05.227 "compare": false, 00:12:05.227 "compare_and_write": false, 00:12:05.227 "abort": true, 00:12:05.227 "seek_hole": false, 00:12:05.227 "seek_data": false, 00:12:05.227 "copy": true, 00:12:05.227 "nvme_iov_md": false 00:12:05.227 }, 00:12:05.227 "memory_domains": [ 00:12:05.227 { 00:12:05.227 "dma_device_id": "system", 00:12:05.227 "dma_device_type": 1 00:12:05.227 }, 00:12:05.227 { 00:12:05.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.227 "dma_device_type": 2 00:12:05.227 } 00:12:05.227 ], 00:12:05.227 "driver_specific": {} 00:12:05.227 } 00:12:05.227 ] 00:12:05.227 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.227 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.228 BaseBdev3 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:05.228 15:38:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.228 [ 00:12:05.228 { 00:12:05.228 "name": "BaseBdev3", 00:12:05.228 "aliases": [ 00:12:05.228 "9d315331-f5ab-4ad8-8a21-c55cdcbffd8b" 00:12:05.228 ], 00:12:05.228 "product_name": "Malloc disk", 00:12:05.228 "block_size": 512, 00:12:05.228 "num_blocks": 65536, 00:12:05.228 "uuid": "9d315331-f5ab-4ad8-8a21-c55cdcbffd8b", 00:12:05.228 "assigned_rate_limits": { 00:12:05.228 "rw_ios_per_sec": 0, 00:12:05.228 "rw_mbytes_per_sec": 0, 00:12:05.228 "r_mbytes_per_sec": 0, 00:12:05.228 "w_mbytes_per_sec": 0 00:12:05.228 }, 00:12:05.228 "claimed": false, 00:12:05.228 "zoned": false, 00:12:05.228 "supported_io_types": { 00:12:05.228 "read": true, 00:12:05.228 "write": true, 00:12:05.228 "unmap": true, 00:12:05.228 "flush": true, 00:12:05.228 "reset": true, 00:12:05.228 "nvme_admin": false, 00:12:05.228 "nvme_io": false, 00:12:05.228 "nvme_io_md": false, 00:12:05.228 "write_zeroes": true, 
00:12:05.228 "zcopy": true, 00:12:05.228 "get_zone_info": false, 00:12:05.228 "zone_management": false, 00:12:05.228 "zone_append": false, 00:12:05.228 "compare": false, 00:12:05.228 "compare_and_write": false, 00:12:05.228 "abort": true, 00:12:05.228 "seek_hole": false, 00:12:05.228 "seek_data": false, 00:12:05.228 "copy": true, 00:12:05.228 "nvme_iov_md": false 00:12:05.228 }, 00:12:05.228 "memory_domains": [ 00:12:05.228 { 00:12:05.228 "dma_device_id": "system", 00:12:05.228 "dma_device_type": 1 00:12:05.228 }, 00:12:05.228 { 00:12:05.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.228 "dma_device_type": 2 00:12:05.228 } 00:12:05.228 ], 00:12:05.228 "driver_specific": {} 00:12:05.228 } 00:12:05.228 ] 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.228 [2024-12-06 15:38:48.453214] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:05.228 [2024-12-06 15:38:48.453388] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:05.228 [2024-12-06 15:38:48.453584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:05.228 [2024-12-06 15:38:48.456031] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.228 "name": "Existed_Raid", 00:12:05.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.228 "strip_size_kb": 64, 00:12:05.228 "state": "configuring", 00:12:05.228 "raid_level": "concat", 00:12:05.228 "superblock": false, 00:12:05.228 "num_base_bdevs": 3, 00:12:05.228 "num_base_bdevs_discovered": 2, 00:12:05.228 "num_base_bdevs_operational": 3, 00:12:05.228 "base_bdevs_list": [ 00:12:05.228 { 00:12:05.228 "name": "BaseBdev1", 00:12:05.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.228 "is_configured": false, 00:12:05.228 "data_offset": 0, 00:12:05.228 "data_size": 0 00:12:05.228 }, 00:12:05.228 { 00:12:05.228 "name": "BaseBdev2", 00:12:05.228 "uuid": "24acddff-cd75-47da-a1ea-c3b644ec1f3d", 00:12:05.228 "is_configured": true, 00:12:05.228 "data_offset": 0, 00:12:05.228 "data_size": 65536 00:12:05.228 }, 00:12:05.228 { 00:12:05.228 "name": "BaseBdev3", 00:12:05.228 "uuid": "9d315331-f5ab-4ad8-8a21-c55cdcbffd8b", 00:12:05.228 "is_configured": true, 00:12:05.228 "data_offset": 0, 00:12:05.228 "data_size": 65536 00:12:05.228 } 00:12:05.228 ] 00:12:05.228 }' 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.228 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.797 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:05.797 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.797 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.797 [2024-12-06 15:38:48.884721] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:05.797 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.797 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:05.797 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.797 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.797 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:05.797 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:05.797 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:05.797 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.797 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.797 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.797 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.797 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.797 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.797 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.797 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.797 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.797 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.797 "name": "Existed_Raid", 00:12:05.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.797 "strip_size_kb": 64, 00:12:05.797 "state": "configuring", 00:12:05.797 "raid_level": "concat", 00:12:05.797 "superblock": false, 
00:12:05.797 "num_base_bdevs": 3, 00:12:05.797 "num_base_bdevs_discovered": 1, 00:12:05.797 "num_base_bdevs_operational": 3, 00:12:05.797 "base_bdevs_list": [ 00:12:05.797 { 00:12:05.797 "name": "BaseBdev1", 00:12:05.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.797 "is_configured": false, 00:12:05.797 "data_offset": 0, 00:12:05.797 "data_size": 0 00:12:05.797 }, 00:12:05.797 { 00:12:05.797 "name": null, 00:12:05.797 "uuid": "24acddff-cd75-47da-a1ea-c3b644ec1f3d", 00:12:05.797 "is_configured": false, 00:12:05.797 "data_offset": 0, 00:12:05.797 "data_size": 65536 00:12:05.797 }, 00:12:05.797 { 00:12:05.797 "name": "BaseBdev3", 00:12:05.797 "uuid": "9d315331-f5ab-4ad8-8a21-c55cdcbffd8b", 00:12:05.797 "is_configured": true, 00:12:05.797 "data_offset": 0, 00:12:05.797 "data_size": 65536 00:12:05.797 } 00:12:05.797 ] 00:12:05.797 }' 00:12:05.797 15:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.797 15:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.057 15:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.057 15:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.057 15:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:06.057 15:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.057 15:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.057 15:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:06.057 15:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:06.057 15:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.057 
15:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.320 [2024-12-06 15:38:49.385172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:06.320 BaseBdev1 00:12:06.320 15:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.320 15:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:06.320 15:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:06.320 15:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:06.320 15:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:06.320 15:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:06.320 15:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:06.320 15:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:06.320 15:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.320 15:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.320 15:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.320 15:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:06.320 15:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.320 15:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.320 [ 00:12:06.320 { 00:12:06.320 "name": "BaseBdev1", 00:12:06.320 "aliases": [ 00:12:06.320 "cc75a456-8c4c-4b24-b846-43ea4998313b" 00:12:06.320 ], 00:12:06.320 "product_name": 
"Malloc disk", 00:12:06.320 "block_size": 512, 00:12:06.320 "num_blocks": 65536, 00:12:06.320 "uuid": "cc75a456-8c4c-4b24-b846-43ea4998313b", 00:12:06.320 "assigned_rate_limits": { 00:12:06.320 "rw_ios_per_sec": 0, 00:12:06.320 "rw_mbytes_per_sec": 0, 00:12:06.320 "r_mbytes_per_sec": 0, 00:12:06.320 "w_mbytes_per_sec": 0 00:12:06.320 }, 00:12:06.320 "claimed": true, 00:12:06.320 "claim_type": "exclusive_write", 00:12:06.320 "zoned": false, 00:12:06.320 "supported_io_types": { 00:12:06.320 "read": true, 00:12:06.320 "write": true, 00:12:06.320 "unmap": true, 00:12:06.320 "flush": true, 00:12:06.320 "reset": true, 00:12:06.320 "nvme_admin": false, 00:12:06.320 "nvme_io": false, 00:12:06.320 "nvme_io_md": false, 00:12:06.320 "write_zeroes": true, 00:12:06.320 "zcopy": true, 00:12:06.320 "get_zone_info": false, 00:12:06.320 "zone_management": false, 00:12:06.320 "zone_append": false, 00:12:06.320 "compare": false, 00:12:06.320 "compare_and_write": false, 00:12:06.320 "abort": true, 00:12:06.320 "seek_hole": false, 00:12:06.320 "seek_data": false, 00:12:06.320 "copy": true, 00:12:06.320 "nvme_iov_md": false 00:12:06.320 }, 00:12:06.320 "memory_domains": [ 00:12:06.320 { 00:12:06.320 "dma_device_id": "system", 00:12:06.320 "dma_device_type": 1 00:12:06.320 }, 00:12:06.320 { 00:12:06.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.320 "dma_device_type": 2 00:12:06.320 } 00:12:06.320 ], 00:12:06.320 "driver_specific": {} 00:12:06.320 } 00:12:06.320 ] 00:12:06.320 15:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.320 15:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:06.320 15:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:06.320 15:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.320 15:38:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:06.320 15:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:06.320 15:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:06.320 15:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:06.320 15:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.320 15:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.320 15:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.320 15:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.320 15:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.320 15:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.320 15:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.320 15:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.320 15:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.320 15:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.320 "name": "Existed_Raid", 00:12:06.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.320 "strip_size_kb": 64, 00:12:06.320 "state": "configuring", 00:12:06.320 "raid_level": "concat", 00:12:06.320 "superblock": false, 00:12:06.320 "num_base_bdevs": 3, 00:12:06.320 "num_base_bdevs_discovered": 2, 00:12:06.320 "num_base_bdevs_operational": 3, 00:12:06.320 "base_bdevs_list": [ 00:12:06.320 { 00:12:06.320 "name": "BaseBdev1", 
00:12:06.320 "uuid": "cc75a456-8c4c-4b24-b846-43ea4998313b", 00:12:06.320 "is_configured": true, 00:12:06.320 "data_offset": 0, 00:12:06.320 "data_size": 65536 00:12:06.320 }, 00:12:06.320 { 00:12:06.320 "name": null, 00:12:06.320 "uuid": "24acddff-cd75-47da-a1ea-c3b644ec1f3d", 00:12:06.320 "is_configured": false, 00:12:06.320 "data_offset": 0, 00:12:06.320 "data_size": 65536 00:12:06.320 }, 00:12:06.320 { 00:12:06.320 "name": "BaseBdev3", 00:12:06.320 "uuid": "9d315331-f5ab-4ad8-8a21-c55cdcbffd8b", 00:12:06.320 "is_configured": true, 00:12:06.320 "data_offset": 0, 00:12:06.320 "data_size": 65536 00:12:06.320 } 00:12:06.320 ] 00:12:06.320 }' 00:12:06.320 15:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.320 15:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.581 15:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.581 15:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:06.581 15:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.581 15:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.840 15:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.840 15:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:06.840 15:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:06.840 15:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.840 15:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.840 [2024-12-06 15:38:49.896672] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:06.840 
15:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.840 15:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:06.840 15:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.840 15:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:06.840 15:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:06.840 15:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:06.840 15:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:06.840 15:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.840 15:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.840 15:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.840 15:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.840 15:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.840 15:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.840 15:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.840 15:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.840 15:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.840 15:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.840 "name": "Existed_Raid", 00:12:06.840 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:06.840 "strip_size_kb": 64, 00:12:06.840 "state": "configuring", 00:12:06.840 "raid_level": "concat", 00:12:06.840 "superblock": false, 00:12:06.840 "num_base_bdevs": 3, 00:12:06.840 "num_base_bdevs_discovered": 1, 00:12:06.840 "num_base_bdevs_operational": 3, 00:12:06.840 "base_bdevs_list": [ 00:12:06.840 { 00:12:06.840 "name": "BaseBdev1", 00:12:06.840 "uuid": "cc75a456-8c4c-4b24-b846-43ea4998313b", 00:12:06.840 "is_configured": true, 00:12:06.840 "data_offset": 0, 00:12:06.840 "data_size": 65536 00:12:06.840 }, 00:12:06.840 { 00:12:06.840 "name": null, 00:12:06.840 "uuid": "24acddff-cd75-47da-a1ea-c3b644ec1f3d", 00:12:06.840 "is_configured": false, 00:12:06.840 "data_offset": 0, 00:12:06.840 "data_size": 65536 00:12:06.840 }, 00:12:06.840 { 00:12:06.840 "name": null, 00:12:06.840 "uuid": "9d315331-f5ab-4ad8-8a21-c55cdcbffd8b", 00:12:06.840 "is_configured": false, 00:12:06.840 "data_offset": 0, 00:12:06.840 "data_size": 65536 00:12:06.840 } 00:12:06.840 ] 00:12:06.840 }' 00:12:06.840 15:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.840 15:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.098 15:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.098 15:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.098 15:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.098 15:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:07.098 15:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.356 15:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:07.356 15:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:07.356 15:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.356 15:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.356 [2024-12-06 15:38:50.404051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:07.356 15:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.356 15:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:07.356 15:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.356 15:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.356 15:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:07.356 15:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.356 15:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:07.356 15:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.356 15:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.356 15:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.356 15:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.356 15:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.356 15:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.356 15:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:12:07.356 15:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.356 15:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.356 15:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.356 "name": "Existed_Raid", 00:12:07.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.356 "strip_size_kb": 64, 00:12:07.356 "state": "configuring", 00:12:07.356 "raid_level": "concat", 00:12:07.356 "superblock": false, 00:12:07.356 "num_base_bdevs": 3, 00:12:07.356 "num_base_bdevs_discovered": 2, 00:12:07.356 "num_base_bdevs_operational": 3, 00:12:07.356 "base_bdevs_list": [ 00:12:07.356 { 00:12:07.356 "name": "BaseBdev1", 00:12:07.356 "uuid": "cc75a456-8c4c-4b24-b846-43ea4998313b", 00:12:07.356 "is_configured": true, 00:12:07.356 "data_offset": 0, 00:12:07.356 "data_size": 65536 00:12:07.356 }, 00:12:07.356 { 00:12:07.356 "name": null, 00:12:07.356 "uuid": "24acddff-cd75-47da-a1ea-c3b644ec1f3d", 00:12:07.356 "is_configured": false, 00:12:07.356 "data_offset": 0, 00:12:07.356 "data_size": 65536 00:12:07.356 }, 00:12:07.356 { 00:12:07.356 "name": "BaseBdev3", 00:12:07.356 "uuid": "9d315331-f5ab-4ad8-8a21-c55cdcbffd8b", 00:12:07.356 "is_configured": true, 00:12:07.356 "data_offset": 0, 00:12:07.356 "data_size": 65536 00:12:07.356 } 00:12:07.356 ] 00:12:07.356 }' 00:12:07.356 15:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.357 15:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.615 15:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.615 15:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:07.615 15:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:07.615 15:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.615 15:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.615 15:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:07.615 15:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:07.615 15:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.615 15:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.615 [2024-12-06 15:38:50.887666] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:07.874 15:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.874 15:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:07.874 15:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.874 15:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.874 15:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:07.874 15:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.874 15:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:07.874 15:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.874 15:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.874 15:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.874 15:38:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.874 15:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.874 15:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.874 15:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.874 15:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.874 15:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.874 15:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.874 "name": "Existed_Raid", 00:12:07.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.874 "strip_size_kb": 64, 00:12:07.874 "state": "configuring", 00:12:07.874 "raid_level": "concat", 00:12:07.874 "superblock": false, 00:12:07.874 "num_base_bdevs": 3, 00:12:07.874 "num_base_bdevs_discovered": 1, 00:12:07.874 "num_base_bdevs_operational": 3, 00:12:07.874 "base_bdevs_list": [ 00:12:07.874 { 00:12:07.874 "name": null, 00:12:07.874 "uuid": "cc75a456-8c4c-4b24-b846-43ea4998313b", 00:12:07.874 "is_configured": false, 00:12:07.874 "data_offset": 0, 00:12:07.874 "data_size": 65536 00:12:07.874 }, 00:12:07.874 { 00:12:07.874 "name": null, 00:12:07.874 "uuid": "24acddff-cd75-47da-a1ea-c3b644ec1f3d", 00:12:07.874 "is_configured": false, 00:12:07.874 "data_offset": 0, 00:12:07.874 "data_size": 65536 00:12:07.874 }, 00:12:07.874 { 00:12:07.874 "name": "BaseBdev3", 00:12:07.874 "uuid": "9d315331-f5ab-4ad8-8a21-c55cdcbffd8b", 00:12:07.874 "is_configured": true, 00:12:07.874 "data_offset": 0, 00:12:07.874 "data_size": 65536 00:12:07.874 } 00:12:07.874 ] 00:12:07.874 }' 00:12:07.874 15:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.874 15:38:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.132 15:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.132 15:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:08.132 15:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.132 15:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.132 15:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.391 15:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:08.391 15:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:08.391 15:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.391 15:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.391 [2024-12-06 15:38:51.441192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:08.391 15:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.391 15:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:08.391 15:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.391 15:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.391 15:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:08.391 15:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.391 15:38:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:08.391 15:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.391 15:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.391 15:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.391 15:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.391 15:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.391 15:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.391 15:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.391 15:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.391 15:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.391 15:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.391 "name": "Existed_Raid", 00:12:08.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.391 "strip_size_kb": 64, 00:12:08.391 "state": "configuring", 00:12:08.391 "raid_level": "concat", 00:12:08.391 "superblock": false, 00:12:08.391 "num_base_bdevs": 3, 00:12:08.391 "num_base_bdevs_discovered": 2, 00:12:08.391 "num_base_bdevs_operational": 3, 00:12:08.391 "base_bdevs_list": [ 00:12:08.391 { 00:12:08.391 "name": null, 00:12:08.391 "uuid": "cc75a456-8c4c-4b24-b846-43ea4998313b", 00:12:08.391 "is_configured": false, 00:12:08.391 "data_offset": 0, 00:12:08.391 "data_size": 65536 00:12:08.391 }, 00:12:08.391 { 00:12:08.391 "name": "BaseBdev2", 00:12:08.391 "uuid": "24acddff-cd75-47da-a1ea-c3b644ec1f3d", 00:12:08.391 "is_configured": true, 00:12:08.391 "data_offset": 
0, 00:12:08.391 "data_size": 65536 00:12:08.391 }, 00:12:08.391 { 00:12:08.391 "name": "BaseBdev3", 00:12:08.391 "uuid": "9d315331-f5ab-4ad8-8a21-c55cdcbffd8b", 00:12:08.391 "is_configured": true, 00:12:08.391 "data_offset": 0, 00:12:08.391 "data_size": 65536 00:12:08.391 } 00:12:08.391 ] 00:12:08.391 }' 00:12:08.391 15:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.391 15:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.649 15:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:08.649 15:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.649 15:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.649 15:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.649 15:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.649 15:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:08.649 15:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.649 15:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:08.649 15:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.649 15:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.649 15:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.649 15:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cc75a456-8c4c-4b24-b846-43ea4998313b 00:12:08.649 15:38:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.650 15:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.908 [2024-12-06 15:38:51.957904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:08.908 [2024-12-06 15:38:51.957956] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:08.908 [2024-12-06 15:38:51.957968] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:08.908 [2024-12-06 15:38:51.958291] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:08.908 [2024-12-06 15:38:51.958475] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:08.908 [2024-12-06 15:38:51.958485] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:08.908 [2024-12-06 15:38:51.958801] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:08.908 NewBaseBdev 00:12:08.909 15:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.909 15:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:08.909 15:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:08.909 15:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:08.909 15:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:08.909 15:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:08.909 15:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:08.909 15:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:08.909 
15:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.909 15:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.909 15:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.909 15:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:08.909 15:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.909 15:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.909 [ 00:12:08.909 { 00:12:08.909 "name": "NewBaseBdev", 00:12:08.909 "aliases": [ 00:12:08.909 "cc75a456-8c4c-4b24-b846-43ea4998313b" 00:12:08.909 ], 00:12:08.909 "product_name": "Malloc disk", 00:12:08.909 "block_size": 512, 00:12:08.909 "num_blocks": 65536, 00:12:08.909 "uuid": "cc75a456-8c4c-4b24-b846-43ea4998313b", 00:12:08.909 "assigned_rate_limits": { 00:12:08.909 "rw_ios_per_sec": 0, 00:12:08.909 "rw_mbytes_per_sec": 0, 00:12:08.909 "r_mbytes_per_sec": 0, 00:12:08.909 "w_mbytes_per_sec": 0 00:12:08.909 }, 00:12:08.909 "claimed": true, 00:12:08.909 "claim_type": "exclusive_write", 00:12:08.909 "zoned": false, 00:12:08.909 "supported_io_types": { 00:12:08.909 "read": true, 00:12:08.909 "write": true, 00:12:08.909 "unmap": true, 00:12:08.909 "flush": true, 00:12:08.909 "reset": true, 00:12:08.909 "nvme_admin": false, 00:12:08.909 "nvme_io": false, 00:12:08.909 "nvme_io_md": false, 00:12:08.909 "write_zeroes": true, 00:12:08.909 "zcopy": true, 00:12:08.909 "get_zone_info": false, 00:12:08.909 "zone_management": false, 00:12:08.909 "zone_append": false, 00:12:08.909 "compare": false, 00:12:08.909 "compare_and_write": false, 00:12:08.909 "abort": true, 00:12:08.909 "seek_hole": false, 00:12:08.909 "seek_data": false, 00:12:08.909 "copy": true, 00:12:08.909 "nvme_iov_md": false 00:12:08.909 }, 00:12:08.909 
"memory_domains": [ 00:12:08.909 { 00:12:08.909 "dma_device_id": "system", 00:12:08.909 "dma_device_type": 1 00:12:08.909 }, 00:12:08.909 { 00:12:08.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.909 "dma_device_type": 2 00:12:08.909 } 00:12:08.909 ], 00:12:08.909 "driver_specific": {} 00:12:08.909 } 00:12:08.909 ] 00:12:08.909 15:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.909 15:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:08.909 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:08.909 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.909 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:08.909 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:08.909 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.909 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:08.909 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.909 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.909 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.909 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.909 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.909 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.909 15:38:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.909 15:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.909 15:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.909 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.909 "name": "Existed_Raid", 00:12:08.909 "uuid": "00d55c12-6839-4799-869e-098ca715cfac", 00:12:08.909 "strip_size_kb": 64, 00:12:08.909 "state": "online", 00:12:08.909 "raid_level": "concat", 00:12:08.909 "superblock": false, 00:12:08.909 "num_base_bdevs": 3, 00:12:08.909 "num_base_bdevs_discovered": 3, 00:12:08.909 "num_base_bdevs_operational": 3, 00:12:08.909 "base_bdevs_list": [ 00:12:08.909 { 00:12:08.909 "name": "NewBaseBdev", 00:12:08.909 "uuid": "cc75a456-8c4c-4b24-b846-43ea4998313b", 00:12:08.909 "is_configured": true, 00:12:08.909 "data_offset": 0, 00:12:08.909 "data_size": 65536 00:12:08.909 }, 00:12:08.909 { 00:12:08.909 "name": "BaseBdev2", 00:12:08.909 "uuid": "24acddff-cd75-47da-a1ea-c3b644ec1f3d", 00:12:08.909 "is_configured": true, 00:12:08.909 "data_offset": 0, 00:12:08.909 "data_size": 65536 00:12:08.909 }, 00:12:08.909 { 00:12:08.909 "name": "BaseBdev3", 00:12:08.909 "uuid": "9d315331-f5ab-4ad8-8a21-c55cdcbffd8b", 00:12:08.909 "is_configured": true, 00:12:08.909 "data_offset": 0, 00:12:08.909 "data_size": 65536 00:12:08.909 } 00:12:08.909 ] 00:12:08.909 }' 00:12:08.909 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.909 15:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.168 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:09.168 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:09.168 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:12:09.168 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:09.168 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:09.168 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:09.168 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:09.168 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:09.168 15:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.168 15:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.168 [2024-12-06 15:38:52.409699] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:09.168 15:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.168 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:09.168 "name": "Existed_Raid", 00:12:09.168 "aliases": [ 00:12:09.168 "00d55c12-6839-4799-869e-098ca715cfac" 00:12:09.168 ], 00:12:09.168 "product_name": "Raid Volume", 00:12:09.168 "block_size": 512, 00:12:09.168 "num_blocks": 196608, 00:12:09.168 "uuid": "00d55c12-6839-4799-869e-098ca715cfac", 00:12:09.168 "assigned_rate_limits": { 00:12:09.168 "rw_ios_per_sec": 0, 00:12:09.168 "rw_mbytes_per_sec": 0, 00:12:09.168 "r_mbytes_per_sec": 0, 00:12:09.168 "w_mbytes_per_sec": 0 00:12:09.168 }, 00:12:09.168 "claimed": false, 00:12:09.168 "zoned": false, 00:12:09.168 "supported_io_types": { 00:12:09.168 "read": true, 00:12:09.168 "write": true, 00:12:09.168 "unmap": true, 00:12:09.168 "flush": true, 00:12:09.168 "reset": true, 00:12:09.168 "nvme_admin": false, 00:12:09.168 "nvme_io": false, 00:12:09.168 "nvme_io_md": false, 00:12:09.168 "write_zeroes": true, 
00:12:09.168 "zcopy": false, 00:12:09.168 "get_zone_info": false, 00:12:09.168 "zone_management": false, 00:12:09.168 "zone_append": false, 00:12:09.168 "compare": false, 00:12:09.168 "compare_and_write": false, 00:12:09.168 "abort": false, 00:12:09.168 "seek_hole": false, 00:12:09.168 "seek_data": false, 00:12:09.168 "copy": false, 00:12:09.168 "nvme_iov_md": false 00:12:09.168 }, 00:12:09.168 "memory_domains": [ 00:12:09.168 { 00:12:09.168 "dma_device_id": "system", 00:12:09.168 "dma_device_type": 1 00:12:09.168 }, 00:12:09.168 { 00:12:09.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.168 "dma_device_type": 2 00:12:09.168 }, 00:12:09.168 { 00:12:09.168 "dma_device_id": "system", 00:12:09.168 "dma_device_type": 1 00:12:09.168 }, 00:12:09.168 { 00:12:09.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.168 "dma_device_type": 2 00:12:09.168 }, 00:12:09.168 { 00:12:09.168 "dma_device_id": "system", 00:12:09.168 "dma_device_type": 1 00:12:09.168 }, 00:12:09.168 { 00:12:09.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.168 "dma_device_type": 2 00:12:09.168 } 00:12:09.168 ], 00:12:09.168 "driver_specific": { 00:12:09.168 "raid": { 00:12:09.168 "uuid": "00d55c12-6839-4799-869e-098ca715cfac", 00:12:09.168 "strip_size_kb": 64, 00:12:09.169 "state": "online", 00:12:09.169 "raid_level": "concat", 00:12:09.169 "superblock": false, 00:12:09.169 "num_base_bdevs": 3, 00:12:09.169 "num_base_bdevs_discovered": 3, 00:12:09.169 "num_base_bdevs_operational": 3, 00:12:09.169 "base_bdevs_list": [ 00:12:09.169 { 00:12:09.169 "name": "NewBaseBdev", 00:12:09.169 "uuid": "cc75a456-8c4c-4b24-b846-43ea4998313b", 00:12:09.169 "is_configured": true, 00:12:09.169 "data_offset": 0, 00:12:09.169 "data_size": 65536 00:12:09.169 }, 00:12:09.169 { 00:12:09.169 "name": "BaseBdev2", 00:12:09.169 "uuid": "24acddff-cd75-47da-a1ea-c3b644ec1f3d", 00:12:09.169 "is_configured": true, 00:12:09.169 "data_offset": 0, 00:12:09.169 "data_size": 65536 00:12:09.169 }, 00:12:09.169 { 
00:12:09.169 "name": "BaseBdev3", 00:12:09.169 "uuid": "9d315331-f5ab-4ad8-8a21-c55cdcbffd8b", 00:12:09.169 "is_configured": true, 00:12:09.169 "data_offset": 0, 00:12:09.169 "data_size": 65536 00:12:09.169 } 00:12:09.169 ] 00:12:09.169 } 00:12:09.169 } 00:12:09.169 }' 00:12:09.169 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:09.426 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:09.427 BaseBdev2 00:12:09.427 BaseBdev3' 00:12:09.427 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.427 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:09.427 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:09.427 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:09.427 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.427 15:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.427 15:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.427 15:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.427 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:09.427 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:09.427 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:09.427 15:38:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:09.427 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.427 15:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.427 15:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.427 15:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.427 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:09.427 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:09.427 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:09.427 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.427 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:09.427 15:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.427 15:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.427 15:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.427 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:09.427 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:09.427 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:09.427 15:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.427 15:38:52 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:12:09.427 [2024-12-06 15:38:52.680949] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:09.427 [2024-12-06 15:38:52.680989] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:09.427 [2024-12-06 15:38:52.681094] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:09.427 [2024-12-06 15:38:52.681163] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:09.427 [2024-12-06 15:38:52.681179] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:09.427 15:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.427 15:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65636 00:12:09.427 15:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65636 ']' 00:12:09.427 15:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65636 00:12:09.427 15:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:09.427 15:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:09.427 15:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65636 00:12:09.685 killing process with pid 65636 00:12:09.685 15:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:09.685 15:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:09.685 15:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65636' 00:12:09.685 15:38:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 65636 00:12:09.685 [2024-12-06 15:38:52.735400] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:09.685 15:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65636 00:12:09.943 [2024-12-06 15:38:53.072744] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:11.318 15:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:11.318 00:12:11.318 real 0m10.568s 00:12:11.318 user 0m16.365s 00:12:11.318 sys 0m2.249s 00:12:11.318 15:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:11.318 15:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.318 ************************************ 00:12:11.318 END TEST raid_state_function_test 00:12:11.318 ************************************ 00:12:11.318 15:38:54 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:12:11.318 15:38:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:11.318 15:38:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:11.318 15:38:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:11.318 ************************************ 00:12:11.318 START TEST raid_state_function_test_sb 00:12:11.318 ************************************ 00:12:11.318 15:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:12:11.318 15:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:11.318 15:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:11.318 15:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:11.318 15:38:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:11.318 15:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:11.318 15:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:11.318 15:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:11.318 15:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:11.318 15:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:11.318 15:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:11.318 15:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:11.318 15:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:11.318 15:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:11.318 15:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:11.318 15:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:11.318 15:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:11.318 15:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:11.318 15:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:11.318 15:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:11.318 15:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:11.318 15:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:11.318 15:38:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:11.318 15:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:11.318 15:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:11.318 15:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:11.318 15:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:11.318 15:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66263 00:12:11.318 15:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:11.318 Process raid pid: 66263 00:12:11.319 15:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66263' 00:12:11.319 15:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66263 00:12:11.319 15:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66263 ']' 00:12:11.319 15:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.319 15:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:11.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.319 15:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:11.319 15:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:11.319 15:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.319 [2024-12-06 15:38:54.526704] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:12:11.319 [2024-12-06 15:38:54.526858] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.577 [2024-12-06 15:38:54.699337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.577 [2024-12-06 15:38:54.841209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.835 [2024-12-06 15:38:55.095551] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:11.835 [2024-12-06 15:38:55.095616] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:12.092 15:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:12.092 15:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:12.092 15:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:12.092 15:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.092 15:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.092 [2024-12-06 15:38:55.382464] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:12.092 [2024-12-06 15:38:55.382546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:12.092 [2024-12-06 
15:38:55.382559] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:12.092 [2024-12-06 15:38:55.382573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:12.092 [2024-12-06 15:38:55.382580] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:12.092 [2024-12-06 15:38:55.382593] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:12.350 15:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.350 15:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:12.350 15:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:12.350 15:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:12.350 15:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:12.350 15:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:12.350 15:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:12.350 15:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.350 15:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.350 15:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.350 15:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.350 15:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.350 15:38:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.350 15:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.350 15:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.350 15:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.350 15:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.350 "name": "Existed_Raid", 00:12:12.350 "uuid": "16ec919b-7192-4498-b2ec-c39dca43db21", 00:12:12.350 "strip_size_kb": 64, 00:12:12.350 "state": "configuring", 00:12:12.350 "raid_level": "concat", 00:12:12.350 "superblock": true, 00:12:12.350 "num_base_bdevs": 3, 00:12:12.350 "num_base_bdevs_discovered": 0, 00:12:12.350 "num_base_bdevs_operational": 3, 00:12:12.350 "base_bdevs_list": [ 00:12:12.350 { 00:12:12.350 "name": "BaseBdev1", 00:12:12.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.350 "is_configured": false, 00:12:12.350 "data_offset": 0, 00:12:12.350 "data_size": 0 00:12:12.350 }, 00:12:12.350 { 00:12:12.350 "name": "BaseBdev2", 00:12:12.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.350 "is_configured": false, 00:12:12.350 "data_offset": 0, 00:12:12.350 "data_size": 0 00:12:12.350 }, 00:12:12.350 { 00:12:12.350 "name": "BaseBdev3", 00:12:12.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.350 "is_configured": false, 00:12:12.350 "data_offset": 0, 00:12:12.350 "data_size": 0 00:12:12.350 } 00:12:12.350 ] 00:12:12.350 }' 00:12:12.350 15:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.350 15:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.608 15:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:12.608 15:38:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.608 15:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.608 [2024-12-06 15:38:55.789969] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:12.608 [2024-12-06 15:38:55.790017] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:12.608 15:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.608 15:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:12.608 15:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.608 15:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.608 [2024-12-06 15:38:55.801955] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:12.608 [2024-12-06 15:38:55.802013] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:12.608 [2024-12-06 15:38:55.802025] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:12.608 [2024-12-06 15:38:55.802039] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:12.608 [2024-12-06 15:38:55.802047] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:12.608 [2024-12-06 15:38:55.802059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:12.608 15:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.608 15:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:12.608 
15:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.609 15:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.609 [2024-12-06 15:38:55.858842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:12.609 BaseBdev1 00:12:12.609 15:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.609 15:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:12.609 15:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:12.609 15:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:12.609 15:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:12.609 15:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:12.609 15:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:12.609 15:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:12.609 15:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.609 15:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.609 15:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.609 15:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:12.609 15:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.609 15:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.609 [ 00:12:12.609 { 
00:12:12.609 "name": "BaseBdev1", 00:12:12.609 "aliases": [ 00:12:12.609 "0d8d4326-ce26-4ccb-aea5-9f85544cfae8" 00:12:12.609 ], 00:12:12.609 "product_name": "Malloc disk", 00:12:12.609 "block_size": 512, 00:12:12.609 "num_blocks": 65536, 00:12:12.609 "uuid": "0d8d4326-ce26-4ccb-aea5-9f85544cfae8", 00:12:12.609 "assigned_rate_limits": { 00:12:12.609 "rw_ios_per_sec": 0, 00:12:12.609 "rw_mbytes_per_sec": 0, 00:12:12.609 "r_mbytes_per_sec": 0, 00:12:12.609 "w_mbytes_per_sec": 0 00:12:12.609 }, 00:12:12.609 "claimed": true, 00:12:12.609 "claim_type": "exclusive_write", 00:12:12.609 "zoned": false, 00:12:12.609 "supported_io_types": { 00:12:12.609 "read": true, 00:12:12.609 "write": true, 00:12:12.609 "unmap": true, 00:12:12.609 "flush": true, 00:12:12.609 "reset": true, 00:12:12.609 "nvme_admin": false, 00:12:12.609 "nvme_io": false, 00:12:12.609 "nvme_io_md": false, 00:12:12.609 "write_zeroes": true, 00:12:12.609 "zcopy": true, 00:12:12.609 "get_zone_info": false, 00:12:12.868 "zone_management": false, 00:12:12.868 "zone_append": false, 00:12:12.868 "compare": false, 00:12:12.868 "compare_and_write": false, 00:12:12.868 "abort": true, 00:12:12.868 "seek_hole": false, 00:12:12.868 "seek_data": false, 00:12:12.868 "copy": true, 00:12:12.868 "nvme_iov_md": false 00:12:12.868 }, 00:12:12.868 "memory_domains": [ 00:12:12.868 { 00:12:12.868 "dma_device_id": "system", 00:12:12.868 "dma_device_type": 1 00:12:12.868 }, 00:12:12.868 { 00:12:12.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.868 "dma_device_type": 2 00:12:12.868 } 00:12:12.868 ], 00:12:12.868 "driver_specific": {} 00:12:12.868 } 00:12:12.868 ] 00:12:12.868 15:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.868 15:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:12.868 15:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:12:12.868 15:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:12.868 15:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:12.868 15:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:12.868 15:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:12.868 15:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:12.868 15:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.868 15:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.868 15:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.868 15:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.868 15:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.868 15:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.868 15:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.868 15:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.868 15:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.868 15:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.868 "name": "Existed_Raid", 00:12:12.868 "uuid": "38e8592f-b648-44c4-a6ab-a91949a1d4c4", 00:12:12.868 "strip_size_kb": 64, 00:12:12.868 "state": "configuring", 00:12:12.868 "raid_level": "concat", 00:12:12.868 "superblock": true, 00:12:12.868 
"num_base_bdevs": 3, 00:12:12.868 "num_base_bdevs_discovered": 1, 00:12:12.868 "num_base_bdevs_operational": 3, 00:12:12.868 "base_bdevs_list": [ 00:12:12.868 { 00:12:12.868 "name": "BaseBdev1", 00:12:12.868 "uuid": "0d8d4326-ce26-4ccb-aea5-9f85544cfae8", 00:12:12.868 "is_configured": true, 00:12:12.868 "data_offset": 2048, 00:12:12.868 "data_size": 63488 00:12:12.868 }, 00:12:12.868 { 00:12:12.868 "name": "BaseBdev2", 00:12:12.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.868 "is_configured": false, 00:12:12.868 "data_offset": 0, 00:12:12.868 "data_size": 0 00:12:12.868 }, 00:12:12.868 { 00:12:12.868 "name": "BaseBdev3", 00:12:12.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.868 "is_configured": false, 00:12:12.868 "data_offset": 0, 00:12:12.868 "data_size": 0 00:12:12.868 } 00:12:12.868 ] 00:12:12.868 }' 00:12:12.868 15:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.868 15:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.128 15:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:13.128 15:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.128 15:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.128 [2024-12-06 15:38:56.306482] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:13.128 [2024-12-06 15:38:56.306568] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:13.128 15:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.128 15:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:13.128 
15:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.128 15:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.128 [2024-12-06 15:38:56.318569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:13.128 [2024-12-06 15:38:56.321088] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:13.128 [2024-12-06 15:38:56.321372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:13.128 [2024-12-06 15:38:56.321400] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:13.128 [2024-12-06 15:38:56.321418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:13.128 15:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.128 15:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:13.128 15:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:13.128 15:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:13.128 15:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.128 15:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.128 15:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:13.128 15:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:13.128 15:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:13.128 15:38:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.128 15:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.128 15:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.128 15:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.128 15:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.128 15:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.128 15:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.128 15:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.128 15:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.128 15:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.128 "name": "Existed_Raid", 00:12:13.129 "uuid": "9dad028a-b91f-47f8-a0aa-65982007f674", 00:12:13.129 "strip_size_kb": 64, 00:12:13.129 "state": "configuring", 00:12:13.129 "raid_level": "concat", 00:12:13.129 "superblock": true, 00:12:13.129 "num_base_bdevs": 3, 00:12:13.129 "num_base_bdevs_discovered": 1, 00:12:13.129 "num_base_bdevs_operational": 3, 00:12:13.129 "base_bdevs_list": [ 00:12:13.129 { 00:12:13.129 "name": "BaseBdev1", 00:12:13.129 "uuid": "0d8d4326-ce26-4ccb-aea5-9f85544cfae8", 00:12:13.129 "is_configured": true, 00:12:13.129 "data_offset": 2048, 00:12:13.129 "data_size": 63488 00:12:13.129 }, 00:12:13.129 { 00:12:13.129 "name": "BaseBdev2", 00:12:13.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.129 "is_configured": false, 00:12:13.129 "data_offset": 0, 00:12:13.129 "data_size": 0 00:12:13.129 }, 00:12:13.129 { 00:12:13.129 "name": "BaseBdev3", 00:12:13.129 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:13.129 "is_configured": false, 00:12:13.129 "data_offset": 0, 00:12:13.129 "data_size": 0 00:12:13.129 } 00:12:13.129 ] 00:12:13.129 }' 00:12:13.129 15:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.129 15:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.753 15:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:13.753 15:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.753 15:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.753 [2024-12-06 15:38:56.772316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:13.753 BaseBdev2 00:12:13.753 15:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.753 15:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:13.753 15:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:13.753 15:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:13.753 15:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:13.753 15:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:13.753 15:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:13.753 15:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:13.753 15:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.753 15:38:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:13.753 15:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.753 15:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:13.753 15:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.753 15:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.753 [ 00:12:13.753 { 00:12:13.753 "name": "BaseBdev2", 00:12:13.753 "aliases": [ 00:12:13.753 "70a48263-8ed4-4ccd-8a5a-a6d527ac4fb5" 00:12:13.753 ], 00:12:13.753 "product_name": "Malloc disk", 00:12:13.753 "block_size": 512, 00:12:13.753 "num_blocks": 65536, 00:12:13.753 "uuid": "70a48263-8ed4-4ccd-8a5a-a6d527ac4fb5", 00:12:13.753 "assigned_rate_limits": { 00:12:13.753 "rw_ios_per_sec": 0, 00:12:13.753 "rw_mbytes_per_sec": 0, 00:12:13.753 "r_mbytes_per_sec": 0, 00:12:13.753 "w_mbytes_per_sec": 0 00:12:13.753 }, 00:12:13.753 "claimed": true, 00:12:13.753 "claim_type": "exclusive_write", 00:12:13.753 "zoned": false, 00:12:13.753 "supported_io_types": { 00:12:13.753 "read": true, 00:12:13.753 "write": true, 00:12:13.753 "unmap": true, 00:12:13.753 "flush": true, 00:12:13.753 "reset": true, 00:12:13.753 "nvme_admin": false, 00:12:13.753 "nvme_io": false, 00:12:13.753 "nvme_io_md": false, 00:12:13.753 "write_zeroes": true, 00:12:13.753 "zcopy": true, 00:12:13.753 "get_zone_info": false, 00:12:13.753 "zone_management": false, 00:12:13.753 "zone_append": false, 00:12:13.753 "compare": false, 00:12:13.753 "compare_and_write": false, 00:12:13.753 "abort": true, 00:12:13.753 "seek_hole": false, 00:12:13.753 "seek_data": false, 00:12:13.753 "copy": true, 00:12:13.753 "nvme_iov_md": false 00:12:13.753 }, 00:12:13.753 "memory_domains": [ 00:12:13.753 { 00:12:13.753 "dma_device_id": "system", 00:12:13.753 "dma_device_type": 1 00:12:13.753 }, 00:12:13.753 { 00:12:13.753 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.753 "dma_device_type": 2 00:12:13.753 } 00:12:13.753 ], 00:12:13.753 "driver_specific": {} 00:12:13.753 } 00:12:13.753 ] 00:12:13.753 15:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.753 15:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:13.753 15:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:13.753 15:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:13.753 15:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:13.753 15:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.753 15:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.753 15:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:13.753 15:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:13.753 15:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:13.753 15:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.753 15:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.753 15:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.753 15:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.753 15:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.753 15:38:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.753 15:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.753 15:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.753 15:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.753 15:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.753 "name": "Existed_Raid", 00:12:13.753 "uuid": "9dad028a-b91f-47f8-a0aa-65982007f674", 00:12:13.753 "strip_size_kb": 64, 00:12:13.753 "state": "configuring", 00:12:13.753 "raid_level": "concat", 00:12:13.753 "superblock": true, 00:12:13.753 "num_base_bdevs": 3, 00:12:13.753 "num_base_bdevs_discovered": 2, 00:12:13.753 "num_base_bdevs_operational": 3, 00:12:13.753 "base_bdevs_list": [ 00:12:13.753 { 00:12:13.753 "name": "BaseBdev1", 00:12:13.753 "uuid": "0d8d4326-ce26-4ccb-aea5-9f85544cfae8", 00:12:13.753 "is_configured": true, 00:12:13.753 "data_offset": 2048, 00:12:13.753 "data_size": 63488 00:12:13.753 }, 00:12:13.753 { 00:12:13.753 "name": "BaseBdev2", 00:12:13.753 "uuid": "70a48263-8ed4-4ccd-8a5a-a6d527ac4fb5", 00:12:13.753 "is_configured": true, 00:12:13.753 "data_offset": 2048, 00:12:13.753 "data_size": 63488 00:12:13.753 }, 00:12:13.753 { 00:12:13.753 "name": "BaseBdev3", 00:12:13.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.753 "is_configured": false, 00:12:13.753 "data_offset": 0, 00:12:13.753 "data_size": 0 00:12:13.753 } 00:12:13.753 ] 00:12:13.753 }' 00:12:13.753 15:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.753 15:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.013 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:14.013 15:38:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.013 15:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.013 [2024-12-06 15:38:57.282691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:14.013 [2024-12-06 15:38:57.283049] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:14.013 [2024-12-06 15:38:57.283079] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:14.013 [2024-12-06 15:38:57.283413] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:14.013 [2024-12-06 15:38:57.283620] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:14.013 [2024-12-06 15:38:57.283634] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:14.013 BaseBdev3 00:12:14.013 [2024-12-06 15:38:57.283806] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:14.013 15:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.013 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:14.013 15:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:14.013 15:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:14.013 15:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:14.013 15:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:14.013 15:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:14.013 15:38:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:14.013 15:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.013 15:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.013 15:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.013 15:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:14.013 15:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.013 15:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.272 [ 00:12:14.272 { 00:12:14.272 "name": "BaseBdev3", 00:12:14.272 "aliases": [ 00:12:14.272 "39c04ef1-c376-4328-99e1-0061060dab92" 00:12:14.272 ], 00:12:14.272 "product_name": "Malloc disk", 00:12:14.272 "block_size": 512, 00:12:14.272 "num_blocks": 65536, 00:12:14.272 "uuid": "39c04ef1-c376-4328-99e1-0061060dab92", 00:12:14.272 "assigned_rate_limits": { 00:12:14.272 "rw_ios_per_sec": 0, 00:12:14.272 "rw_mbytes_per_sec": 0, 00:12:14.272 "r_mbytes_per_sec": 0, 00:12:14.272 "w_mbytes_per_sec": 0 00:12:14.272 }, 00:12:14.272 "claimed": true, 00:12:14.272 "claim_type": "exclusive_write", 00:12:14.272 "zoned": false, 00:12:14.272 "supported_io_types": { 00:12:14.272 "read": true, 00:12:14.272 "write": true, 00:12:14.272 "unmap": true, 00:12:14.272 "flush": true, 00:12:14.272 "reset": true, 00:12:14.272 "nvme_admin": false, 00:12:14.272 "nvme_io": false, 00:12:14.272 "nvme_io_md": false, 00:12:14.272 "write_zeroes": true, 00:12:14.272 "zcopy": true, 00:12:14.272 "get_zone_info": false, 00:12:14.272 "zone_management": false, 00:12:14.272 "zone_append": false, 00:12:14.272 "compare": false, 00:12:14.272 "compare_and_write": false, 00:12:14.272 "abort": true, 00:12:14.272 "seek_hole": false, 00:12:14.272 "seek_data": false, 
00:12:14.273 "copy": true, 00:12:14.273 "nvme_iov_md": false 00:12:14.273 }, 00:12:14.273 "memory_domains": [ 00:12:14.273 { 00:12:14.273 "dma_device_id": "system", 00:12:14.273 "dma_device_type": 1 00:12:14.273 }, 00:12:14.273 { 00:12:14.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.273 "dma_device_type": 2 00:12:14.273 } 00:12:14.273 ], 00:12:14.273 "driver_specific": {} 00:12:14.273 } 00:12:14.273 ] 00:12:14.273 15:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.273 15:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:14.273 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:14.273 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:14.273 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:14.273 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.273 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.273 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:14.273 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:14.273 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:14.273 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.273 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.273 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.273 15:38:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.273 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.273 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.273 15:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.273 15:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.273 15:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.273 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.273 "name": "Existed_Raid", 00:12:14.273 "uuid": "9dad028a-b91f-47f8-a0aa-65982007f674", 00:12:14.273 "strip_size_kb": 64, 00:12:14.273 "state": "online", 00:12:14.273 "raid_level": "concat", 00:12:14.273 "superblock": true, 00:12:14.273 "num_base_bdevs": 3, 00:12:14.273 "num_base_bdevs_discovered": 3, 00:12:14.273 "num_base_bdevs_operational": 3, 00:12:14.273 "base_bdevs_list": [ 00:12:14.273 { 00:12:14.273 "name": "BaseBdev1", 00:12:14.273 "uuid": "0d8d4326-ce26-4ccb-aea5-9f85544cfae8", 00:12:14.273 "is_configured": true, 00:12:14.273 "data_offset": 2048, 00:12:14.273 "data_size": 63488 00:12:14.273 }, 00:12:14.273 { 00:12:14.273 "name": "BaseBdev2", 00:12:14.273 "uuid": "70a48263-8ed4-4ccd-8a5a-a6d527ac4fb5", 00:12:14.273 "is_configured": true, 00:12:14.273 "data_offset": 2048, 00:12:14.273 "data_size": 63488 00:12:14.273 }, 00:12:14.273 { 00:12:14.273 "name": "BaseBdev3", 00:12:14.273 "uuid": "39c04ef1-c376-4328-99e1-0061060dab92", 00:12:14.273 "is_configured": true, 00:12:14.273 "data_offset": 2048, 00:12:14.273 "data_size": 63488 00:12:14.273 } 00:12:14.273 ] 00:12:14.273 }' 00:12:14.273 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.273 15:38:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.533 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:14.533 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:14.533 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:14.533 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:14.533 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:14.533 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:14.533 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:14.533 15:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.533 15:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.533 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:14.533 [2024-12-06 15:38:57.710667] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:14.533 15:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.533 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:14.533 "name": "Existed_Raid", 00:12:14.533 "aliases": [ 00:12:14.533 "9dad028a-b91f-47f8-a0aa-65982007f674" 00:12:14.533 ], 00:12:14.533 "product_name": "Raid Volume", 00:12:14.534 "block_size": 512, 00:12:14.534 "num_blocks": 190464, 00:12:14.534 "uuid": "9dad028a-b91f-47f8-a0aa-65982007f674", 00:12:14.534 "assigned_rate_limits": { 00:12:14.534 "rw_ios_per_sec": 0, 00:12:14.534 "rw_mbytes_per_sec": 0, 00:12:14.534 
"r_mbytes_per_sec": 0, 00:12:14.534 "w_mbytes_per_sec": 0 00:12:14.534 }, 00:12:14.534 "claimed": false, 00:12:14.534 "zoned": false, 00:12:14.534 "supported_io_types": { 00:12:14.534 "read": true, 00:12:14.534 "write": true, 00:12:14.534 "unmap": true, 00:12:14.534 "flush": true, 00:12:14.534 "reset": true, 00:12:14.534 "nvme_admin": false, 00:12:14.534 "nvme_io": false, 00:12:14.534 "nvme_io_md": false, 00:12:14.534 "write_zeroes": true, 00:12:14.534 "zcopy": false, 00:12:14.534 "get_zone_info": false, 00:12:14.534 "zone_management": false, 00:12:14.534 "zone_append": false, 00:12:14.534 "compare": false, 00:12:14.534 "compare_and_write": false, 00:12:14.534 "abort": false, 00:12:14.534 "seek_hole": false, 00:12:14.534 "seek_data": false, 00:12:14.534 "copy": false, 00:12:14.534 "nvme_iov_md": false 00:12:14.534 }, 00:12:14.534 "memory_domains": [ 00:12:14.534 { 00:12:14.534 "dma_device_id": "system", 00:12:14.534 "dma_device_type": 1 00:12:14.534 }, 00:12:14.534 { 00:12:14.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.534 "dma_device_type": 2 00:12:14.534 }, 00:12:14.534 { 00:12:14.534 "dma_device_id": "system", 00:12:14.534 "dma_device_type": 1 00:12:14.534 }, 00:12:14.534 { 00:12:14.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.534 "dma_device_type": 2 00:12:14.534 }, 00:12:14.534 { 00:12:14.534 "dma_device_id": "system", 00:12:14.534 "dma_device_type": 1 00:12:14.534 }, 00:12:14.534 { 00:12:14.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.534 "dma_device_type": 2 00:12:14.534 } 00:12:14.534 ], 00:12:14.534 "driver_specific": { 00:12:14.534 "raid": { 00:12:14.534 "uuid": "9dad028a-b91f-47f8-a0aa-65982007f674", 00:12:14.534 "strip_size_kb": 64, 00:12:14.534 "state": "online", 00:12:14.534 "raid_level": "concat", 00:12:14.534 "superblock": true, 00:12:14.534 "num_base_bdevs": 3, 00:12:14.534 "num_base_bdevs_discovered": 3, 00:12:14.534 "num_base_bdevs_operational": 3, 00:12:14.534 "base_bdevs_list": [ 00:12:14.534 { 00:12:14.534 
"name": "BaseBdev1", 00:12:14.534 "uuid": "0d8d4326-ce26-4ccb-aea5-9f85544cfae8", 00:12:14.534 "is_configured": true, 00:12:14.534 "data_offset": 2048, 00:12:14.534 "data_size": 63488 00:12:14.534 }, 00:12:14.534 { 00:12:14.534 "name": "BaseBdev2", 00:12:14.534 "uuid": "70a48263-8ed4-4ccd-8a5a-a6d527ac4fb5", 00:12:14.534 "is_configured": true, 00:12:14.534 "data_offset": 2048, 00:12:14.534 "data_size": 63488 00:12:14.534 }, 00:12:14.534 { 00:12:14.534 "name": "BaseBdev3", 00:12:14.534 "uuid": "39c04ef1-c376-4328-99e1-0061060dab92", 00:12:14.534 "is_configured": true, 00:12:14.534 "data_offset": 2048, 00:12:14.534 "data_size": 63488 00:12:14.534 } 00:12:14.534 ] 00:12:14.534 } 00:12:14.534 } 00:12:14.534 }' 00:12:14.534 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:14.534 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:14.534 BaseBdev2 00:12:14.534 BaseBdev3' 00:12:14.534 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.794 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:14.794 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:14.794 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:14.794 15:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.794 15:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.794 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.794 15:38:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.794 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:14.794 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:14.794 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:14.794 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.794 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:14.794 15:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.794 15:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.794 15:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.794 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:14.794 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:14.794 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:14.794 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:14.794 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.794 15:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.794 15:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.794 15:38:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.794 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:14.794 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:14.794 15:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:14.794 15:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.794 15:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.794 [2024-12-06 15:38:57.990346] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:14.794 [2024-12-06 15:38:57.990385] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:14.794 [2024-12-06 15:38:57.990456] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:15.054 15:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.054 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:15.054 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:15.054 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:15.054 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:15.054 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:15.054 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:12:15.054 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.054 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:12:15.054 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:15.054 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:15.054 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:15.054 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.054 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.054 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.054 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.054 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.054 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.054 15:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.054 15:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.054 15:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.054 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.054 "name": "Existed_Raid", 00:12:15.054 "uuid": "9dad028a-b91f-47f8-a0aa-65982007f674", 00:12:15.054 "strip_size_kb": 64, 00:12:15.054 "state": "offline", 00:12:15.054 "raid_level": "concat", 00:12:15.054 "superblock": true, 00:12:15.054 "num_base_bdevs": 3, 00:12:15.054 "num_base_bdevs_discovered": 2, 00:12:15.054 "num_base_bdevs_operational": 2, 00:12:15.054 "base_bdevs_list": [ 00:12:15.054 { 00:12:15.054 "name": null, 00:12:15.054 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:15.054 "is_configured": false, 00:12:15.054 "data_offset": 0, 00:12:15.054 "data_size": 63488 00:12:15.054 }, 00:12:15.054 { 00:12:15.054 "name": "BaseBdev2", 00:12:15.054 "uuid": "70a48263-8ed4-4ccd-8a5a-a6d527ac4fb5", 00:12:15.054 "is_configured": true, 00:12:15.054 "data_offset": 2048, 00:12:15.054 "data_size": 63488 00:12:15.054 }, 00:12:15.054 { 00:12:15.054 "name": "BaseBdev3", 00:12:15.054 "uuid": "39c04ef1-c376-4328-99e1-0061060dab92", 00:12:15.054 "is_configured": true, 00:12:15.054 "data_offset": 2048, 00:12:15.054 "data_size": 63488 00:12:15.054 } 00:12:15.054 ] 00:12:15.054 }' 00:12:15.055 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.055 15:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.314 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:15.314 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:15.314 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.314 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:15.314 15:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.314 15:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.314 15:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.314 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:15.314 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:15.314 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:12:15.314 15:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.314 15:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.314 [2024-12-06 15:38:58.560760] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:15.574 15:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.574 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:15.574 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:15.574 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:15.574 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.574 15:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.574 15:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.574 15:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.574 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:15.574 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:15.574 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:15.574 15:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.574 15:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.574 [2024-12-06 15:38:58.709308] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:15.574 [2024-12-06 15:38:58.709373] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:15.574 15:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.574 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:15.575 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:15.575 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.575 15:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.575 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:15.575 15:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.575 15:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.575 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:15.575 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:15.575 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:15.575 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:15.575 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:15.575 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:15.575 15:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.575 15:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.834 BaseBdev2 00:12:15.834 15:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.834 
15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:15.834 15:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:15.834 15:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:15.834 15:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:15.834 15:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:15.834 15:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:15.834 15:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:15.834 15:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.834 15:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.834 15:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.834 15:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:15.834 15:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.834 15:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.834 [ 00:12:15.834 { 00:12:15.834 "name": "BaseBdev2", 00:12:15.834 "aliases": [ 00:12:15.834 "9e2a0bf2-4a91-4f8e-a67d-aadac7f4f87a" 00:12:15.834 ], 00:12:15.834 "product_name": "Malloc disk", 00:12:15.834 "block_size": 512, 00:12:15.834 "num_blocks": 65536, 00:12:15.834 "uuid": "9e2a0bf2-4a91-4f8e-a67d-aadac7f4f87a", 00:12:15.834 "assigned_rate_limits": { 00:12:15.834 "rw_ios_per_sec": 0, 00:12:15.834 "rw_mbytes_per_sec": 0, 00:12:15.834 "r_mbytes_per_sec": 0, 00:12:15.834 "w_mbytes_per_sec": 0 
00:12:15.834 }, 00:12:15.834 "claimed": false, 00:12:15.834 "zoned": false, 00:12:15.834 "supported_io_types": { 00:12:15.834 "read": true, 00:12:15.834 "write": true, 00:12:15.834 "unmap": true, 00:12:15.834 "flush": true, 00:12:15.834 "reset": true, 00:12:15.834 "nvme_admin": false, 00:12:15.834 "nvme_io": false, 00:12:15.834 "nvme_io_md": false, 00:12:15.834 "write_zeroes": true, 00:12:15.834 "zcopy": true, 00:12:15.834 "get_zone_info": false, 00:12:15.834 "zone_management": false, 00:12:15.834 "zone_append": false, 00:12:15.834 "compare": false, 00:12:15.834 "compare_and_write": false, 00:12:15.834 "abort": true, 00:12:15.834 "seek_hole": false, 00:12:15.834 "seek_data": false, 00:12:15.834 "copy": true, 00:12:15.834 "nvme_iov_md": false 00:12:15.834 }, 00:12:15.834 "memory_domains": [ 00:12:15.834 { 00:12:15.834 "dma_device_id": "system", 00:12:15.834 "dma_device_type": 1 00:12:15.834 }, 00:12:15.834 { 00:12:15.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.834 "dma_device_type": 2 00:12:15.834 } 00:12:15.834 ], 00:12:15.834 "driver_specific": {} 00:12:15.834 } 00:12:15.834 ] 00:12:15.834 15:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.834 15:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:15.835 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:15.835 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:15.835 15:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:15.835 15:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.835 15:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.835 BaseBdev3 00:12:15.835 15:38:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.835 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:15.835 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:15.835 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:15.835 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:15.835 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:15.835 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:15.835 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:15.835 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.835 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.835 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.835 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:15.835 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.835 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.835 [ 00:12:15.835 { 00:12:15.835 "name": "BaseBdev3", 00:12:15.835 "aliases": [ 00:12:15.835 "b971ec28-7408-4423-86e1-32320359e42c" 00:12:15.835 ], 00:12:15.835 "product_name": "Malloc disk", 00:12:15.835 "block_size": 512, 00:12:15.835 "num_blocks": 65536, 00:12:15.835 "uuid": "b971ec28-7408-4423-86e1-32320359e42c", 00:12:15.835 "assigned_rate_limits": { 00:12:15.835 "rw_ios_per_sec": 0, 00:12:15.835 "rw_mbytes_per_sec": 0, 
00:12:15.835 "r_mbytes_per_sec": 0, 00:12:15.835 "w_mbytes_per_sec": 0 00:12:15.835 }, 00:12:15.835 "claimed": false, 00:12:15.835 "zoned": false, 00:12:15.835 "supported_io_types": { 00:12:15.835 "read": true, 00:12:15.835 "write": true, 00:12:15.835 "unmap": true, 00:12:15.835 "flush": true, 00:12:15.835 "reset": true, 00:12:15.835 "nvme_admin": false, 00:12:15.835 "nvme_io": false, 00:12:15.835 "nvme_io_md": false, 00:12:15.835 "write_zeroes": true, 00:12:15.835 "zcopy": true, 00:12:15.835 "get_zone_info": false, 00:12:15.835 "zone_management": false, 00:12:15.835 "zone_append": false, 00:12:15.835 "compare": false, 00:12:15.835 "compare_and_write": false, 00:12:15.835 "abort": true, 00:12:15.835 "seek_hole": false, 00:12:15.835 "seek_data": false, 00:12:15.835 "copy": true, 00:12:15.835 "nvme_iov_md": false 00:12:15.835 }, 00:12:15.835 "memory_domains": [ 00:12:15.835 { 00:12:15.835 "dma_device_id": "system", 00:12:15.835 "dma_device_type": 1 00:12:15.835 }, 00:12:15.835 { 00:12:15.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.835 "dma_device_type": 2 00:12:15.835 } 00:12:15.835 ], 00:12:15.835 "driver_specific": {} 00:12:15.835 } 00:12:15.835 ] 00:12:15.835 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.835 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:15.835 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:15.835 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:15.835 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:15.835 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.835 15:38:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:15.835 [2024-12-06 15:38:59.050528] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:15.835 [2024-12-06 15:38:59.050581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:15.835 [2024-12-06 15:38:59.050606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:15.835 [2024-12-06 15:38:59.052916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:15.835 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.835 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:15.835 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.835 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.835 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:15.835 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:15.835 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:15.835 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.835 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.835 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.835 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.835 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.835 15:38:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.835 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.835 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.835 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.835 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.835 "name": "Existed_Raid", 00:12:15.835 "uuid": "3ca3c701-610d-4d6a-91e5-81ac0b6ab60d", 00:12:15.835 "strip_size_kb": 64, 00:12:15.835 "state": "configuring", 00:12:15.835 "raid_level": "concat", 00:12:15.835 "superblock": true, 00:12:15.835 "num_base_bdevs": 3, 00:12:15.835 "num_base_bdevs_discovered": 2, 00:12:15.835 "num_base_bdevs_operational": 3, 00:12:15.835 "base_bdevs_list": [ 00:12:15.835 { 00:12:15.835 "name": "BaseBdev1", 00:12:15.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.835 "is_configured": false, 00:12:15.835 "data_offset": 0, 00:12:15.835 "data_size": 0 00:12:15.835 }, 00:12:15.835 { 00:12:15.835 "name": "BaseBdev2", 00:12:15.835 "uuid": "9e2a0bf2-4a91-4f8e-a67d-aadac7f4f87a", 00:12:15.835 "is_configured": true, 00:12:15.835 "data_offset": 2048, 00:12:15.835 "data_size": 63488 00:12:15.835 }, 00:12:15.835 { 00:12:15.835 "name": "BaseBdev3", 00:12:15.835 "uuid": "b971ec28-7408-4423-86e1-32320359e42c", 00:12:15.835 "is_configured": true, 00:12:15.835 "data_offset": 2048, 00:12:15.835 "data_size": 63488 00:12:15.835 } 00:12:15.835 ] 00:12:15.835 }' 00:12:15.835 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.835 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.404 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:12:16.404 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.404 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.404 [2024-12-06 15:38:59.450380] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:16.404 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.404 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:16.404 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.404 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.404 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:16.404 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:16.404 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:16.404 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.404 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.404 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.405 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.405 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.405 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.405 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:16.405 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.405 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.405 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.405 "name": "Existed_Raid", 00:12:16.405 "uuid": "3ca3c701-610d-4d6a-91e5-81ac0b6ab60d", 00:12:16.405 "strip_size_kb": 64, 00:12:16.405 "state": "configuring", 00:12:16.405 "raid_level": "concat", 00:12:16.405 "superblock": true, 00:12:16.405 "num_base_bdevs": 3, 00:12:16.405 "num_base_bdevs_discovered": 1, 00:12:16.405 "num_base_bdevs_operational": 3, 00:12:16.405 "base_bdevs_list": [ 00:12:16.405 { 00:12:16.405 "name": "BaseBdev1", 00:12:16.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.405 "is_configured": false, 00:12:16.405 "data_offset": 0, 00:12:16.405 "data_size": 0 00:12:16.405 }, 00:12:16.405 { 00:12:16.405 "name": null, 00:12:16.405 "uuid": "9e2a0bf2-4a91-4f8e-a67d-aadac7f4f87a", 00:12:16.405 "is_configured": false, 00:12:16.405 "data_offset": 0, 00:12:16.405 "data_size": 63488 00:12:16.405 }, 00:12:16.405 { 00:12:16.405 "name": "BaseBdev3", 00:12:16.405 "uuid": "b971ec28-7408-4423-86e1-32320359e42c", 00:12:16.405 "is_configured": true, 00:12:16.405 "data_offset": 2048, 00:12:16.405 "data_size": 63488 00:12:16.405 } 00:12:16.405 ] 00:12:16.405 }' 00:12:16.405 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.405 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.664 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:16.664 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.664 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:16.664 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.664 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.664 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:16.664 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:16.664 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.664 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.664 [2024-12-06 15:38:59.931253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:16.664 BaseBdev1 00:12:16.664 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.664 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:16.664 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:16.664 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:16.664 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:16.664 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:16.664 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:16.664 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:16.664 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.664 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.664 15:38:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.664 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:16.664 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.664 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.924 [ 00:12:16.924 { 00:12:16.924 "name": "BaseBdev1", 00:12:16.924 "aliases": [ 00:12:16.924 "2e511524-9e01-4d82-bd88-79e6b7313ea2" 00:12:16.924 ], 00:12:16.924 "product_name": "Malloc disk", 00:12:16.924 "block_size": 512, 00:12:16.924 "num_blocks": 65536, 00:12:16.924 "uuid": "2e511524-9e01-4d82-bd88-79e6b7313ea2", 00:12:16.924 "assigned_rate_limits": { 00:12:16.924 "rw_ios_per_sec": 0, 00:12:16.924 "rw_mbytes_per_sec": 0, 00:12:16.924 "r_mbytes_per_sec": 0, 00:12:16.924 "w_mbytes_per_sec": 0 00:12:16.924 }, 00:12:16.924 "claimed": true, 00:12:16.924 "claim_type": "exclusive_write", 00:12:16.924 "zoned": false, 00:12:16.924 "supported_io_types": { 00:12:16.924 "read": true, 00:12:16.924 "write": true, 00:12:16.924 "unmap": true, 00:12:16.924 "flush": true, 00:12:16.924 "reset": true, 00:12:16.924 "nvme_admin": false, 00:12:16.924 "nvme_io": false, 00:12:16.924 "nvme_io_md": false, 00:12:16.924 "write_zeroes": true, 00:12:16.924 "zcopy": true, 00:12:16.924 "get_zone_info": false, 00:12:16.924 "zone_management": false, 00:12:16.924 "zone_append": false, 00:12:16.924 "compare": false, 00:12:16.924 "compare_and_write": false, 00:12:16.924 "abort": true, 00:12:16.924 "seek_hole": false, 00:12:16.924 "seek_data": false, 00:12:16.924 "copy": true, 00:12:16.924 "nvme_iov_md": false 00:12:16.924 }, 00:12:16.924 "memory_domains": [ 00:12:16.924 { 00:12:16.924 "dma_device_id": "system", 00:12:16.924 "dma_device_type": 1 00:12:16.924 }, 00:12:16.924 { 00:12:16.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.924 
"dma_device_type": 2 00:12:16.924 } 00:12:16.924 ], 00:12:16.924 "driver_specific": {} 00:12:16.924 } 00:12:16.924 ] 00:12:16.924 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.924 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:16.925 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:16.925 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.925 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.925 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:16.925 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:16.925 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:16.925 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.925 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.925 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.925 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.925 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.925 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.925 15:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.925 15:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
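Throughout this trace, `verify_raid_bdev_state` pulls the raid record out of `rpc_cmd bdev_raid_get_bdevs all` with `jq -r '.[] | select(.name == "Existed_Raid")'`. The same selection can be sketched in Python over a trimmed sample of the output shown above — the helper name is illustrative, and the field values are copied from the log:

```python
import json

def select_raid_bdev(raid_bdevs_json, name):
    """Equivalent of jq's '.[] | select(.name == NAME)' over
    bdev_raid_get_bdevs output: return the first matching record."""
    for record in json.loads(raid_bdevs_json):
        if record.get("name") == name:
            return record
    return None

# Trimmed sample of `bdev_raid_get_bdevs all` output, values from the log.
sample = json.dumps([{
    "name": "Existed_Raid",
    "state": "configuring",
    "raid_level": "concat",
    "strip_size_kb": 64,
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 2,
}])

info = select_raid_bdev(sample, "Existed_Raid")
print(info["state"])  # configuring
```

An empty match (`raid_bdev=`) is exactly what the earlier `jq -r '.[0]["name"] | select(.)'` step produced after `raid_bdev_cleanup`, which is why the `'[' -n '' ']'` test above falls through.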
00:12:16.925 15:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.925 15:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.925 "name": "Existed_Raid", 00:12:16.925 "uuid": "3ca3c701-610d-4d6a-91e5-81ac0b6ab60d", 00:12:16.925 "strip_size_kb": 64, 00:12:16.925 "state": "configuring", 00:12:16.925 "raid_level": "concat", 00:12:16.925 "superblock": true, 00:12:16.925 "num_base_bdevs": 3, 00:12:16.925 "num_base_bdevs_discovered": 2, 00:12:16.925 "num_base_bdevs_operational": 3, 00:12:16.925 "base_bdevs_list": [ 00:12:16.925 { 00:12:16.925 "name": "BaseBdev1", 00:12:16.925 "uuid": "2e511524-9e01-4d82-bd88-79e6b7313ea2", 00:12:16.925 "is_configured": true, 00:12:16.925 "data_offset": 2048, 00:12:16.925 "data_size": 63488 00:12:16.925 }, 00:12:16.925 { 00:12:16.925 "name": null, 00:12:16.925 "uuid": "9e2a0bf2-4a91-4f8e-a67d-aadac7f4f87a", 00:12:16.925 "is_configured": false, 00:12:16.925 "data_offset": 0, 00:12:16.925 "data_size": 63488 00:12:16.925 }, 00:12:16.925 { 00:12:16.925 "name": "BaseBdev3", 00:12:16.925 "uuid": "b971ec28-7408-4423-86e1-32320359e42c", 00:12:16.925 "is_configured": true, 00:12:16.925 "data_offset": 2048, 00:12:16.925 "data_size": 63488 00:12:16.925 } 00:12:16.925 ] 00:12:16.925 }' 00:12:16.925 15:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.925 15:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.184 15:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.184 15:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.184 15:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:17.184 15:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:12:17.184 15:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.184 15:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:17.184 15:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:17.184 15:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.184 15:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.184 [2024-12-06 15:39:00.458650] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:17.184 15:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.184 15:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:17.184 15:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.184 15:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.184 15:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:17.184 15:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:17.184 15:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:17.184 15:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.184 15:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.184 15:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.184 15:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.184 
15:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.184 15:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.184 15:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.184 15:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.443 15:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.443 15:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.443 "name": "Existed_Raid", 00:12:17.443 "uuid": "3ca3c701-610d-4d6a-91e5-81ac0b6ab60d", 00:12:17.443 "strip_size_kb": 64, 00:12:17.443 "state": "configuring", 00:12:17.443 "raid_level": "concat", 00:12:17.443 "superblock": true, 00:12:17.443 "num_base_bdevs": 3, 00:12:17.443 "num_base_bdevs_discovered": 1, 00:12:17.443 "num_base_bdevs_operational": 3, 00:12:17.443 "base_bdevs_list": [ 00:12:17.443 { 00:12:17.443 "name": "BaseBdev1", 00:12:17.443 "uuid": "2e511524-9e01-4d82-bd88-79e6b7313ea2", 00:12:17.443 "is_configured": true, 00:12:17.443 "data_offset": 2048, 00:12:17.443 "data_size": 63488 00:12:17.443 }, 00:12:17.443 { 00:12:17.443 "name": null, 00:12:17.443 "uuid": "9e2a0bf2-4a91-4f8e-a67d-aadac7f4f87a", 00:12:17.443 "is_configured": false, 00:12:17.443 "data_offset": 0, 00:12:17.443 "data_size": 63488 00:12:17.443 }, 00:12:17.443 { 00:12:17.443 "name": null, 00:12:17.443 "uuid": "b971ec28-7408-4423-86e1-32320359e42c", 00:12:17.443 "is_configured": false, 00:12:17.443 "data_offset": 0, 00:12:17.443 "data_size": 63488 00:12:17.443 } 00:12:17.443 ] 00:12:17.443 }' 00:12:17.443 15:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.443 15:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.703 
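After `bdev_raid_remove_base_bdev BaseBdev3` (`bdev_raid.sh@302`–`@303`), the record above shows the array still in `configuring`: `num_base_bdevs_discovered` drops to 1 while `num_base_bdevs_operational` stays 3, and each removed slot reports `"name": null` with `"is_configured": false`. The invariants `verify_raid_bdev_state` asserts for this state can be sketched as follows — the helper name and the standalone dict are illustrative; field names and values are copied from the log record:

```python
def check_configuring_state(info, expected_level, expected_strip_kb,
                            expected_operational):
    """Sketch of the per-state checks verify_raid_bdev_state performs
    on a bdev_raid_get_bdevs record in the 'configuring' state."""
    assert info["state"] == "configuring"
    assert info["raid_level"] == expected_level
    assert info["strip_size_kb"] == expected_strip_kb
    assert info["num_base_bdevs_operational"] == expected_operational
    # Discovered == slots still configured in base_bdevs_list.
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert discovered == info["num_base_bdevs_discovered"]

# Values copied from the record after BaseBdev2 and BaseBdev3 were removed.
info = {
    "state": "configuring",
    "raid_level": "concat",
    "strip_size_kb": 64,
    "num_base_bdevs_operational": 3,
    "num_base_bdevs_discovered": 1,
    "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": True},
        {"name": None, "is_configured": False},
        {"name": None, "is_configured": False},
    ],
}
check_configuring_state(info, "concat", 64, 3)
print("configuring-state checks passed")
```

The superblock (`-s`) variant keeps `data_offset: 2048` and `data_size: 63488` on configured slots because the first 2048 blocks of each 65536-block base bdev are reserved for the raid superblock.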
15:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:17.703 15:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.703 15:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.703 15:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.703 15:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.703 15:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:17.703 15:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:17.703 15:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.703 15:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.703 [2024-12-06 15:39:00.906319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:17.703 15:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.703 15:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:17.703 15:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.703 15:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.703 15:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:17.703 15:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:17.703 15:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:12:17.703 15:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.703 15:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.703 15:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.703 15:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.703 15:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.703 15:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.703 15:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.703 15:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.703 15:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.703 15:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.703 "name": "Existed_Raid", 00:12:17.703 "uuid": "3ca3c701-610d-4d6a-91e5-81ac0b6ab60d", 00:12:17.703 "strip_size_kb": 64, 00:12:17.703 "state": "configuring", 00:12:17.703 "raid_level": "concat", 00:12:17.703 "superblock": true, 00:12:17.703 "num_base_bdevs": 3, 00:12:17.703 "num_base_bdevs_discovered": 2, 00:12:17.703 "num_base_bdevs_operational": 3, 00:12:17.703 "base_bdevs_list": [ 00:12:17.703 { 00:12:17.703 "name": "BaseBdev1", 00:12:17.703 "uuid": "2e511524-9e01-4d82-bd88-79e6b7313ea2", 00:12:17.703 "is_configured": true, 00:12:17.703 "data_offset": 2048, 00:12:17.703 "data_size": 63488 00:12:17.703 }, 00:12:17.703 { 00:12:17.703 "name": null, 00:12:17.703 "uuid": "9e2a0bf2-4a91-4f8e-a67d-aadac7f4f87a", 00:12:17.703 "is_configured": false, 00:12:17.703 "data_offset": 0, 00:12:17.703 "data_size": 
63488 00:12:17.703 }, 00:12:17.703 { 00:12:17.703 "name": "BaseBdev3", 00:12:17.703 "uuid": "b971ec28-7408-4423-86e1-32320359e42c", 00:12:17.703 "is_configured": true, 00:12:17.703 "data_offset": 2048, 00:12:17.703 "data_size": 63488 00:12:17.703 } 00:12:17.703 ] 00:12:17.703 }' 00:12:17.703 15:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.703 15:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.272 15:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.273 15:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.273 15:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.273 15:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:18.273 15:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.273 15:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:18.273 15:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:18.273 15:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.273 15:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.273 [2024-12-06 15:39:01.390409] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:18.273 15:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.273 15:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:18.273 15:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:12:18.273 15:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.273 15:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:18.273 15:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:18.273 15:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:18.273 15:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.273 15:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.273 15:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.273 15:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.273 15:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.273 15:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.273 15:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.273 15:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.273 15:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.273 15:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.273 "name": "Existed_Raid", 00:12:18.273 "uuid": "3ca3c701-610d-4d6a-91e5-81ac0b6ab60d", 00:12:18.273 "strip_size_kb": 64, 00:12:18.273 "state": "configuring", 00:12:18.273 "raid_level": "concat", 00:12:18.273 "superblock": true, 00:12:18.273 "num_base_bdevs": 3, 00:12:18.273 "num_base_bdevs_discovered": 1, 00:12:18.273 "num_base_bdevs_operational": 
3, 00:12:18.273 "base_bdevs_list": [ 00:12:18.273 { 00:12:18.273 "name": null, 00:12:18.273 "uuid": "2e511524-9e01-4d82-bd88-79e6b7313ea2", 00:12:18.273 "is_configured": false, 00:12:18.273 "data_offset": 0, 00:12:18.273 "data_size": 63488 00:12:18.273 }, 00:12:18.273 { 00:12:18.273 "name": null, 00:12:18.273 "uuid": "9e2a0bf2-4a91-4f8e-a67d-aadac7f4f87a", 00:12:18.273 "is_configured": false, 00:12:18.273 "data_offset": 0, 00:12:18.273 "data_size": 63488 00:12:18.273 }, 00:12:18.273 { 00:12:18.273 "name": "BaseBdev3", 00:12:18.273 "uuid": "b971ec28-7408-4423-86e1-32320359e42c", 00:12:18.273 "is_configured": true, 00:12:18.273 "data_offset": 2048, 00:12:18.273 "data_size": 63488 00:12:18.273 } 00:12:18.273 ] 00:12:18.273 }' 00:12:18.273 15:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.273 15:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.846 15:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.846 15:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.846 15:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.846 15:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:18.846 15:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.846 15:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:18.846 15:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:18.846 15:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.846 15:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:12:18.846 [2024-12-06 15:39:02.014335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:18.846 15:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.846 15:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:18.846 15:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.846 15:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.846 15:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:18.846 15:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:18.846 15:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:18.846 15:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.846 15:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.846 15:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.846 15:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.846 15:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.846 15:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.846 15:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.846 15:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.846 15:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:12:18.846 15:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.846 "name": "Existed_Raid", 00:12:18.846 "uuid": "3ca3c701-610d-4d6a-91e5-81ac0b6ab60d", 00:12:18.846 "strip_size_kb": 64, 00:12:18.846 "state": "configuring", 00:12:18.846 "raid_level": "concat", 00:12:18.846 "superblock": true, 00:12:18.846 "num_base_bdevs": 3, 00:12:18.846 "num_base_bdevs_discovered": 2, 00:12:18.846 "num_base_bdevs_operational": 3, 00:12:18.846 "base_bdevs_list": [ 00:12:18.846 { 00:12:18.846 "name": null, 00:12:18.846 "uuid": "2e511524-9e01-4d82-bd88-79e6b7313ea2", 00:12:18.846 "is_configured": false, 00:12:18.846 "data_offset": 0, 00:12:18.846 "data_size": 63488 00:12:18.846 }, 00:12:18.846 { 00:12:18.846 "name": "BaseBdev2", 00:12:18.846 "uuid": "9e2a0bf2-4a91-4f8e-a67d-aadac7f4f87a", 00:12:18.846 "is_configured": true, 00:12:18.846 "data_offset": 2048, 00:12:18.846 "data_size": 63488 00:12:18.846 }, 00:12:18.846 { 00:12:18.846 "name": "BaseBdev3", 00:12:18.846 "uuid": "b971ec28-7408-4423-86e1-32320359e42c", 00:12:18.846 "is_configured": true, 00:12:18.846 "data_offset": 2048, 00:12:18.846 "data_size": 63488 00:12:18.846 } 00:12:18.846 ] 00:12:18.846 }' 00:12:18.846 15:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.846 15:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.416 15:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:19.416 15:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.416 15:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.416 15:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.416 15:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:19.416 15:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:19.416 15:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.416 15:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.416 15:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:19.416 15:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.416 15:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.416 15:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2e511524-9e01-4d82-bd88-79e6b7313ea2 00:12:19.416 15:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.416 15:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.416 [2024-12-06 15:39:02.583093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:19.416 [2024-12-06 15:39:02.583421] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:19.416 [2024-12-06 15:39:02.583443] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:19.416 [2024-12-06 15:39:02.583784] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:19.416 [2024-12-06 15:39:02.583952] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:19.416 [2024-12-06 15:39:02.583969] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:19.416 [2024-12-06 15:39:02.584127] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:12:19.416 NewBaseBdev 00:12:19.416 15:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.416 15:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:19.416 15:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:19.416 15:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:19.416 15:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:19.416 15:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:19.416 15:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:19.416 15:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:19.416 15:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.416 15:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.416 15:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.416 15:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:19.416 15:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.416 15:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.416 [ 00:12:19.416 { 00:12:19.416 "name": "NewBaseBdev", 00:12:19.416 "aliases": [ 00:12:19.416 "2e511524-9e01-4d82-bd88-79e6b7313ea2" 00:12:19.416 ], 00:12:19.416 "product_name": "Malloc disk", 00:12:19.416 "block_size": 512, 00:12:19.417 "num_blocks": 65536, 00:12:19.417 "uuid": "2e511524-9e01-4d82-bd88-79e6b7313ea2", 
00:12:19.417 "assigned_rate_limits": { 00:12:19.417 "rw_ios_per_sec": 0, 00:12:19.417 "rw_mbytes_per_sec": 0, 00:12:19.417 "r_mbytes_per_sec": 0, 00:12:19.417 "w_mbytes_per_sec": 0 00:12:19.417 }, 00:12:19.417 "claimed": true, 00:12:19.417 "claim_type": "exclusive_write", 00:12:19.417 "zoned": false, 00:12:19.417 "supported_io_types": { 00:12:19.417 "read": true, 00:12:19.417 "write": true, 00:12:19.417 "unmap": true, 00:12:19.417 "flush": true, 00:12:19.417 "reset": true, 00:12:19.417 "nvme_admin": false, 00:12:19.417 "nvme_io": false, 00:12:19.417 "nvme_io_md": false, 00:12:19.417 "write_zeroes": true, 00:12:19.417 "zcopy": true, 00:12:19.417 "get_zone_info": false, 00:12:19.417 "zone_management": false, 00:12:19.417 "zone_append": false, 00:12:19.417 "compare": false, 00:12:19.417 "compare_and_write": false, 00:12:19.417 "abort": true, 00:12:19.417 "seek_hole": false, 00:12:19.417 "seek_data": false, 00:12:19.417 "copy": true, 00:12:19.417 "nvme_iov_md": false 00:12:19.417 }, 00:12:19.417 "memory_domains": [ 00:12:19.417 { 00:12:19.417 "dma_device_id": "system", 00:12:19.417 "dma_device_type": 1 00:12:19.417 }, 00:12:19.417 { 00:12:19.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.417 "dma_device_type": 2 00:12:19.417 } 00:12:19.417 ], 00:12:19.417 "driver_specific": {} 00:12:19.417 } 00:12:19.417 ] 00:12:19.417 15:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.417 15:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:19.417 15:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:19.417 15:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.417 15:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:19.417 15:39:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:19.417 15:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:19.417 15:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:19.417 15:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.417 15:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.417 15:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.417 15:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.417 15:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.417 15:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.417 15:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.417 15:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.417 15:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.417 15:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.417 "name": "Existed_Raid", 00:12:19.417 "uuid": "3ca3c701-610d-4d6a-91e5-81ac0b6ab60d", 00:12:19.417 "strip_size_kb": 64, 00:12:19.417 "state": "online", 00:12:19.417 "raid_level": "concat", 00:12:19.417 "superblock": true, 00:12:19.417 "num_base_bdevs": 3, 00:12:19.417 "num_base_bdevs_discovered": 3, 00:12:19.417 "num_base_bdevs_operational": 3, 00:12:19.417 "base_bdevs_list": [ 00:12:19.417 { 00:12:19.417 "name": "NewBaseBdev", 00:12:19.417 "uuid": "2e511524-9e01-4d82-bd88-79e6b7313ea2", 00:12:19.417 "is_configured": true, 00:12:19.417 "data_offset": 2048, 
00:12:19.417 "data_size": 63488 00:12:19.417 }, 00:12:19.417 { 00:12:19.417 "name": "BaseBdev2", 00:12:19.417 "uuid": "9e2a0bf2-4a91-4f8e-a67d-aadac7f4f87a", 00:12:19.417 "is_configured": true, 00:12:19.417 "data_offset": 2048, 00:12:19.417 "data_size": 63488 00:12:19.417 }, 00:12:19.417 { 00:12:19.417 "name": "BaseBdev3", 00:12:19.417 "uuid": "b971ec28-7408-4423-86e1-32320359e42c", 00:12:19.417 "is_configured": true, 00:12:19.417 "data_offset": 2048, 00:12:19.417 "data_size": 63488 00:12:19.417 } 00:12:19.417 ] 00:12:19.417 }' 00:12:19.417 15:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.417 15:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.987 15:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:19.987 15:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:19.987 15:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:19.987 15:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:19.987 15:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:19.987 15:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:19.987 15:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:19.987 15:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:19.987 15:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.987 15:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.987 [2024-12-06 15:39:03.046893] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:12:19.987 15:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.987 15:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:19.987 "name": "Existed_Raid", 00:12:19.987 "aliases": [ 00:12:19.987 "3ca3c701-610d-4d6a-91e5-81ac0b6ab60d" 00:12:19.987 ], 00:12:19.987 "product_name": "Raid Volume", 00:12:19.987 "block_size": 512, 00:12:19.987 "num_blocks": 190464, 00:12:19.987 "uuid": "3ca3c701-610d-4d6a-91e5-81ac0b6ab60d", 00:12:19.987 "assigned_rate_limits": { 00:12:19.987 "rw_ios_per_sec": 0, 00:12:19.987 "rw_mbytes_per_sec": 0, 00:12:19.987 "r_mbytes_per_sec": 0, 00:12:19.987 "w_mbytes_per_sec": 0 00:12:19.987 }, 00:12:19.987 "claimed": false, 00:12:19.987 "zoned": false, 00:12:19.987 "supported_io_types": { 00:12:19.987 "read": true, 00:12:19.987 "write": true, 00:12:19.987 "unmap": true, 00:12:19.987 "flush": true, 00:12:19.987 "reset": true, 00:12:19.987 "nvme_admin": false, 00:12:19.987 "nvme_io": false, 00:12:19.987 "nvme_io_md": false, 00:12:19.987 "write_zeroes": true, 00:12:19.987 "zcopy": false, 00:12:19.987 "get_zone_info": false, 00:12:19.987 "zone_management": false, 00:12:19.987 "zone_append": false, 00:12:19.987 "compare": false, 00:12:19.987 "compare_and_write": false, 00:12:19.987 "abort": false, 00:12:19.987 "seek_hole": false, 00:12:19.987 "seek_data": false, 00:12:19.987 "copy": false, 00:12:19.987 "nvme_iov_md": false 00:12:19.987 }, 00:12:19.987 "memory_domains": [ 00:12:19.987 { 00:12:19.987 "dma_device_id": "system", 00:12:19.987 "dma_device_type": 1 00:12:19.987 }, 00:12:19.987 { 00:12:19.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.987 "dma_device_type": 2 00:12:19.987 }, 00:12:19.987 { 00:12:19.987 "dma_device_id": "system", 00:12:19.987 "dma_device_type": 1 00:12:19.987 }, 00:12:19.987 { 00:12:19.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.987 "dma_device_type": 2 00:12:19.987 }, 00:12:19.987 { 
00:12:19.987 "dma_device_id": "system", 00:12:19.987 "dma_device_type": 1 00:12:19.987 }, 00:12:19.987 { 00:12:19.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.987 "dma_device_type": 2 00:12:19.987 } 00:12:19.987 ], 00:12:19.987 "driver_specific": { 00:12:19.987 "raid": { 00:12:19.987 "uuid": "3ca3c701-610d-4d6a-91e5-81ac0b6ab60d", 00:12:19.987 "strip_size_kb": 64, 00:12:19.987 "state": "online", 00:12:19.987 "raid_level": "concat", 00:12:19.987 "superblock": true, 00:12:19.987 "num_base_bdevs": 3, 00:12:19.987 "num_base_bdevs_discovered": 3, 00:12:19.987 "num_base_bdevs_operational": 3, 00:12:19.987 "base_bdevs_list": [ 00:12:19.987 { 00:12:19.987 "name": "NewBaseBdev", 00:12:19.987 "uuid": "2e511524-9e01-4d82-bd88-79e6b7313ea2", 00:12:19.987 "is_configured": true, 00:12:19.987 "data_offset": 2048, 00:12:19.987 "data_size": 63488 00:12:19.987 }, 00:12:19.987 { 00:12:19.987 "name": "BaseBdev2", 00:12:19.987 "uuid": "9e2a0bf2-4a91-4f8e-a67d-aadac7f4f87a", 00:12:19.987 "is_configured": true, 00:12:19.987 "data_offset": 2048, 00:12:19.987 "data_size": 63488 00:12:19.987 }, 00:12:19.987 { 00:12:19.987 "name": "BaseBdev3", 00:12:19.987 "uuid": "b971ec28-7408-4423-86e1-32320359e42c", 00:12:19.987 "is_configured": true, 00:12:19.987 "data_offset": 2048, 00:12:19.987 "data_size": 63488 00:12:19.987 } 00:12:19.987 ] 00:12:19.987 } 00:12:19.987 } 00:12:19.987 }' 00:12:19.987 15:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:19.987 15:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:19.987 BaseBdev2 00:12:19.987 BaseBdev3' 00:12:19.987 15:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.987 15:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 
00:12:19.987 15:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:19.987 15:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:19.987 15:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.987 15:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.987 15:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.987 15:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.987 15:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:19.987 15:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:19.987 15:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:19.987 15:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.987 15:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:19.987 15:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.987 15:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.987 15:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.987 15:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:19.987 15:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:19.987 15:39:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:19.987 15:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:19.987 15:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.987 15:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.987 15:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:20.247 15:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.247 15:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:20.247 15:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:20.247 15:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:20.247 15:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.247 15:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.247 [2024-12-06 15:39:03.314346] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:20.247 [2024-12-06 15:39:03.314388] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:20.247 [2024-12-06 15:39:03.314525] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:20.247 [2024-12-06 15:39:03.314602] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:20.247 [2024-12-06 15:39:03.314619] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:20.247 15:39:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.247 15:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66263 00:12:20.247 15:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66263 ']' 00:12:20.247 15:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66263 00:12:20.247 15:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:20.247 15:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:20.247 15:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66263 00:12:20.247 killing process with pid 66263 00:12:20.247 15:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:20.247 15:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:20.247 15:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66263' 00:12:20.247 15:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66263 00:12:20.247 [2024-12-06 15:39:03.362403] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:20.247 15:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66263 00:12:20.507 [2024-12-06 15:39:03.704334] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:21.886 ************************************ 00:12:21.886 END TEST raid_state_function_test_sb 00:12:21.886 ************************************ 00:12:21.886 15:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:21.886 00:12:21.886 real 0m10.541s 00:12:21.886 user 0m16.298s 00:12:21.886 sys 0m2.359s 00:12:21.886 15:39:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:12:21.886 15:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.886 15:39:05 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:12:21.886 15:39:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:21.886 15:39:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:21.886 15:39:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:21.886 ************************************ 00:12:21.886 START TEST raid_superblock_test 00:12:21.886 ************************************ 00:12:21.886 15:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:12:21.886 15:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:12:21.886 15:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:12:21.886 15:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:21.886 15:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:21.886 15:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:21.886 15:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:21.886 15:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:21.886 15:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:21.886 15:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:21.886 15:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:21.886 15:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:21.886 15:39:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:21.886 15:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:21.886 15:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:12:21.886 15:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:21.886 15:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:21.886 15:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66878 00:12:21.886 15:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66878 00:12:21.886 15:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:21.886 15:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66878 ']' 00:12:21.886 15:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.886 15:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:21.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.887 15:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.887 15:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:21.887 15:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.887 [2024-12-06 15:39:05.144731] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:12:21.887 [2024-12-06 15:39:05.144888] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66878 ]
00:12:22.164 [2024-12-06 15:39:05.321203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:22.423 [2024-12-06 15:39:05.466630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:22.423 [2024-12-06 15:39:05.709575] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:22.423 [2024-12-06 15:39:05.709656] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:22.993 15:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:22.993 15:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:12:22.993 15:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:12:22.993 15:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:12:22.993 15:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:12:22.993 15:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:12:22.993 15:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:12:22.993 15:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:12:22.993 15:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:12:22.993 15:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:12:22.993 15:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:12:22.993 15:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:22.993 15:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:22.993 malloc1
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:22.993 [2024-12-06 15:39:06.041987] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:12:22.993 [2024-12-06 15:39:06.042063] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:22.993 [2024-12-06 15:39:06.042091] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:12:22.993 [2024-12-06 15:39:06.042104] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:22.993 [2024-12-06 15:39:06.044868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:22.993 [2024-12-06 15:39:06.044913] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:12:22.993 pt1
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:22.993 malloc2
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:22.993 [2024-12-06 15:39:06.104628] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:22.993 [2024-12-06 15:39:06.104807] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:22.993 [2024-12-06 15:39:06.104850] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:12:22.993 [2024-12-06 15:39:06.104863] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:22.993 [2024-12-06 15:39:06.107592] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:22.993 [2024-12-06 15:39:06.107632] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:22.993 pt2
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:22.993 malloc3
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:22.993 [2024-12-06 15:39:06.178484] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:12:22.993 [2024-12-06 15:39:06.178677] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:22.993 [2024-12-06 15:39:06.178744] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:12:22.993 [2024-12-06 15:39:06.178828] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:22.993 [2024-12-06 15:39:06.181801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:22.993 [2024-12-06 15:39:06.181934] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:12:22.993 pt3
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:22.993 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:22.993 [2024-12-06 15:39:06.190648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:12:22.993 [2024-12-06 15:39:06.193235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:22.993 [2024-12-06 15:39:06.193311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:12:22.993 [2024-12-06 15:39:06.193491] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:12:22.993 [2024-12-06 15:39:06.193529] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:12:22.993 [2024-12-06 15:39:06.193818] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:12:22.993 [2024-12-06 15:39:06.193998] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:12:22.993 [2024-12-06 15:39:06.194009] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:12:22.993 [2024-12-06 15:39:06.194184] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:22.994 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:22.994 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:12:22.994 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:22.994 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:22.994 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:12:22.994 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:22.994 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:22.994 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:22.994 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:22.994 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:22.994 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:22.994 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:22.994 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:22.994 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:22.994 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:22.994 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:22.994 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:22.994 "name": "raid_bdev1",
00:12:22.994 "uuid": "40a47543-475e-46da-ab84-e4d9afe43151",
00:12:22.994 "strip_size_kb": 64,
00:12:22.994 "state": "online",
00:12:22.994 "raid_level": "concat",
00:12:22.994 "superblock": true,
00:12:22.994 "num_base_bdevs": 3,
00:12:22.994 "num_base_bdevs_discovered": 3,
00:12:22.994 "num_base_bdevs_operational": 3,
00:12:22.994 "base_bdevs_list": [
00:12:22.994 {
00:12:22.994 "name": "pt1",
00:12:22.994 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:22.994 "is_configured": true,
00:12:22.994 "data_offset": 2048,
00:12:22.994 "data_size": 63488
00:12:22.994 },
00:12:22.994 {
00:12:22.994 "name": "pt2",
00:12:22.994 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:22.994 "is_configured": true,
00:12:22.994 "data_offset": 2048,
00:12:22.994 "data_size": 63488
00:12:22.994 },
00:12:22.994 {
00:12:22.994 "name": "pt3",
00:12:22.994 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:22.994 "is_configured": true,
00:12:22.994 "data_offset": 2048,
00:12:22.994 "data_size": 63488
00:12:22.994 }
00:12:22.994 ]
00:12:22.994 }'
00:12:22.994 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:22.994 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:23.564 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:12:23.564 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:12:23.564 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:12:23.564 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:12:23.564 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:12:23.564 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:12:23.564 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:23.564 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:12:23.564 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:23.564 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:23.564 [2024-12-06 15:39:06.630698] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:23.564 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:23.564 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:12:23.564 "name": "raid_bdev1",
00:12:23.564 "aliases": [
00:12:23.564 "40a47543-475e-46da-ab84-e4d9afe43151"
00:12:23.564 ],
00:12:23.564 "product_name": "Raid Volume",
00:12:23.564 "block_size": 512,
00:12:23.564 "num_blocks": 190464,
00:12:23.564 "uuid": "40a47543-475e-46da-ab84-e4d9afe43151",
00:12:23.564 "assigned_rate_limits": {
00:12:23.564 "rw_ios_per_sec": 0,
00:12:23.564 "rw_mbytes_per_sec": 0,
00:12:23.564 "r_mbytes_per_sec": 0,
00:12:23.564 "w_mbytes_per_sec": 0
00:12:23.564 },
00:12:23.564 "claimed": false,
00:12:23.564 "zoned": false,
00:12:23.564 "supported_io_types": {
00:12:23.564 "read": true,
00:12:23.564 "write": true,
00:12:23.564 "unmap": true,
00:12:23.564 "flush": true,
00:12:23.564 "reset": true,
00:12:23.564 "nvme_admin": false,
00:12:23.564 "nvme_io": false,
00:12:23.564 "nvme_io_md": false,
00:12:23.564 "write_zeroes": true,
00:12:23.564 "zcopy": false,
00:12:23.564 "get_zone_info": false,
00:12:23.564 "zone_management": false,
00:12:23.564 "zone_append": false,
00:12:23.564 "compare": false,
00:12:23.564 "compare_and_write": false,
00:12:23.564 "abort": false,
00:12:23.564 "seek_hole": false,
00:12:23.564 "seek_data": false,
00:12:23.564 "copy": false,
00:12:23.564 "nvme_iov_md": false
00:12:23.564 },
00:12:23.564 "memory_domains": [
00:12:23.564 {
00:12:23.564 "dma_device_id": "system",
00:12:23.564 "dma_device_type": 1
00:12:23.564 },
00:12:23.564 {
00:12:23.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:23.564 "dma_device_type": 2
00:12:23.564 },
00:12:23.564 {
00:12:23.564 "dma_device_id": "system",
00:12:23.564 "dma_device_type": 1
00:12:23.564 },
00:12:23.564 {
00:12:23.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:23.564 "dma_device_type": 2
00:12:23.564 },
00:12:23.564 {
00:12:23.564 "dma_device_id": "system",
00:12:23.564 "dma_device_type": 1
00:12:23.564 },
00:12:23.564 {
00:12:23.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:23.564 "dma_device_type": 2
00:12:23.564 }
00:12:23.564 ],
00:12:23.564 "driver_specific": {
00:12:23.564 "raid": {
00:12:23.564 "uuid": "40a47543-475e-46da-ab84-e4d9afe43151",
00:12:23.564 "strip_size_kb": 64,
00:12:23.564 "state": "online",
00:12:23.564 "raid_level": "concat",
00:12:23.564 "superblock": true,
00:12:23.564 "num_base_bdevs": 3,
00:12:23.564 "num_base_bdevs_discovered": 3,
00:12:23.564 "num_base_bdevs_operational": 3,
00:12:23.564 "base_bdevs_list": [
00:12:23.564 {
00:12:23.564 "name": "pt1",
00:12:23.564 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:23.564 "is_configured": true,
00:12:23.564 "data_offset": 2048,
00:12:23.564 "data_size": 63488
00:12:23.564 },
00:12:23.564 {
00:12:23.564 "name": "pt2",
00:12:23.564 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:23.564 "is_configured": true,
00:12:23.564 "data_offset": 2048,
00:12:23.564 "data_size": 63488
00:12:23.564 },
00:12:23.564 {
00:12:23.564 "name": "pt3",
00:12:23.564 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:23.564 "is_configured": true,
00:12:23.564 "data_offset": 2048,
00:12:23.564 "data_size": 63488
00:12:23.564 }
00:12:23.564 ]
00:12:23.564 }
00:12:23.564 }
00:12:23.564 }'
00:12:23.564 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:12:23.564 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:12:23.564 pt2
00:12:23.564 pt3'
00:12:23.564 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:23.564 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:12:23.564 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:23.564 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:23.564 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:12:23.564 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:23.564 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:23.564 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:23.564 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:23.564 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:23.564 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:23.564 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:23.564 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:12:23.564 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:23.564 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:23.564 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:23.564 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:23.564 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:23.564 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:23.564 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:12:23.564 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:23.564 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:23.564 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:23.836 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:23.836 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:23.836 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:23.836 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:23.836 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:23.836 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:23.836 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:12:23.836 [2024-12-06 15:39:06.894601] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:23.836 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:23.836 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=40a47543-475e-46da-ab84-e4d9afe43151
00:12:23.836 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 40a47543-475e-46da-ab84-e4d9afe43151 ']'
00:12:23.836 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:12:23.836 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:23.836 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:23.836 [2024-12-06 15:39:06.942310] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:23.836 [2024-12-06 15:39:06.942470] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:23.836 [2024-12-06 15:39:06.942612] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:23.836 [2024-12-06 15:39:06.942694] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:23.836 [2024-12-06 15:39:06.942708] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:12:23.836 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:23.836 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:12:23.836 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:23.836 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:23.836 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:23.836 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:23.836 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:12:23.836 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:12:23.836 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:23.836 15:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:12:23.836 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:23.836 15:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:23.836 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:23.836 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:23.836 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:12:23.836 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:23.836 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:23.837 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:23.837 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:23.837 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:12:23.837 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:23.837 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:23.837 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:23.837 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:12:23.837 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:23.837 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:23.837 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:12:23.837 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:23.837 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:12:23.837 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:12:23.837 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:12:23.837 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:12:23.837 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:12:23.837 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:23.837 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:12:23.837 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:23.837 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:12:23.837 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:23.837 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:23.837 [2024-12-06 15:39:07.090362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:12:23.837 [2024-12-06 15:39:07.092905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:12:23.837 [2024-12-06 15:39:07.092969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:12:23.837 [2024-12-06 15:39:07.093037] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:12:23.837 [2024-12-06 15:39:07.093113] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:12:23.837 [2024-12-06 15:39:07.093137] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:12:23.837 [2024-12-06 15:39:07.093160] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:23.837 [2024-12-06 15:39:07.093172] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:12:23.837 request:
00:12:23.837 {
00:12:23.837 "name": "raid_bdev1",
00:12:23.837 "raid_level": "concat",
00:12:23.837 "base_bdevs": [
00:12:23.837 "malloc1",
00:12:23.837 "malloc2",
00:12:23.837 "malloc3"
00:12:23.837 ],
00:12:23.837 "strip_size_kb": 64,
00:12:23.837 "superblock": false,
00:12:23.837 "method": "bdev_raid_create",
00:12:23.837 "req_id": 1
00:12:23.837 }
00:12:23.837 Got JSON-RPC error response
00:12:23.837 response:
00:12:23.837 {
00:12:23.837 "code": -17,
00:12:23.837 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:12:23.837 }
00:12:23.837 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:12:23.837 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:12:23.837 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:23.837 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:12:23.837 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:12:23.837 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:23.837 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:23.837 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:23.837 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:12:24.170 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:24.170 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:12:24.170 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:12:24.170 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:12:24.170 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:24.170 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:24.170 [2024-12-06 15:39:07.162179] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:12:24.170 [2024-12-06 15:39:07.162272] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:24.170 [2024-12-06 15:39:07.162301] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:12:24.170 [2024-12-06 15:39:07.162313] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:24.170 [2024-12-06 15:39:07.165255] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:24.170 [2024-12-06 15:39:07.165300] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:12:24.170 [2024-12-06 15:39:07.165424] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:12:24.170 [2024-12-06 15:39:07.165496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:12:24.170 pt1
00:12:24.170 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:24.170 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3
00:12:24.170 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:24.170 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:24.170 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:12:24.170 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:24.170 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:24.170 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:24.170 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:24.170 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:24.170 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:24.170 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:24.170 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:24.170 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:24.170 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:24.170 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:24.170 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:24.170 "name": "raid_bdev1",
00:12:24.170 "uuid": "40a47543-475e-46da-ab84-e4d9afe43151",
00:12:24.170 "strip_size_kb": 64,
00:12:24.170 "state": "configuring",
00:12:24.170 "raid_level": "concat",
00:12:24.170 "superblock": true,
00:12:24.170 "num_base_bdevs": 3,
00:12:24.170 "num_base_bdevs_discovered": 1,
00:12:24.170 "num_base_bdevs_operational": 3,
00:12:24.170 "base_bdevs_list": [
00:12:24.170 {
00:12:24.170 "name": "pt1",
00:12:24.170 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:24.170 "is_configured": true,
00:12:24.170 "data_offset": 2048,
00:12:24.170 "data_size": 63488
00:12:24.170 },
00:12:24.170 {
00:12:24.170 "name": null,
00:12:24.170 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:24.170 "is_configured": false,
00:12:24.170 "data_offset": 2048,
00:12:24.170 "data_size": 63488
00:12:24.170 },
00:12:24.170 {
00:12:24.170 "name": null,
00:12:24.170 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:24.170 "is_configured": false,
00:12:24.170 "data_offset": 2048,
00:12:24.170 "data_size": 63488
00:12:24.170 }
00:12:24.170 ]
00:12:24.170 }'
00:12:24.170 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:24.170 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:24.429 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']'
00:12:24.429 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:24.429 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:24.429 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:24.430 [2024-12-06 15:39:07.573708] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:24.430 [2024-12-06 15:39:07.573800] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:24.430 [2024-12-06 15:39:07.573838] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:12:24.430 [2024-12-06 15:39:07.573851] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:24.430 [2024-12-06 15:39:07.574413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:24.430 [2024-12-06 15:39:07.574446] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:24.430 [2024-12-06 15:39:07.574575] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:12:24.430 [2024-12-06 15:39:07.574619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:24.430 pt2
00:12:24.430 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:24.430 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:12:24.430 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:24.430 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:24.430 [2024-12-06 15:39:07.581686] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:12:24.430 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:24.430 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3
00:12:24.430 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:24.430 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:24.430 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:12:24.430 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:24.430 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:24.430 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:24.430 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:24.430 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:24.430 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:24.430 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:24.430 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:24.430 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:24.430 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:24.430 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:24.430 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:24.430 "name": "raid_bdev1",
00:12:24.430 "uuid": "40a47543-475e-46da-ab84-e4d9afe43151",
00:12:24.430 "strip_size_kb": 64,
00:12:24.430 "state": "configuring",
00:12:24.430 "raid_level": "concat",
00:12:24.430 "superblock": true,
00:12:24.430 "num_base_bdevs": 3,
00:12:24.430 "num_base_bdevs_discovered": 1,
00:12:24.430 "num_base_bdevs_operational": 3,
00:12:24.430 "base_bdevs_list": [
00:12:24.430 {
00:12:24.430 "name": "pt1",
00:12:24.430 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:24.430 "is_configured": true,
00:12:24.430 "data_offset": 2048,
00:12:24.430 "data_size": 63488
00:12:24.430 },
00:12:24.430 {
00:12:24.430 "name": null,
00:12:24.430 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:24.430 "is_configured": false,
00:12:24.430 "data_offset": 0,
00:12:24.430 "data_size": 63488
00:12:24.430 },
00:12:24.430 {
00:12:24.430 "name": null,
00:12:24.430 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:24.430 "is_configured": false,
00:12:24.430 "data_offset": 2048,
00:12:24.430 "data_size": 63488
00:12:24.430 }
00:12:24.430 ]
00:12:24.430 }'
00:12:24.430 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:24.430 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:25.000 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:12:25.000 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:25.000 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:25.000 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:25.000 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:25.000 [2024-12-06 15:39:07.993682] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:25.000 [2024-12-06 15:39:07.993778] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:25.000 [2024-12-06 15:39:07.993805] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:12:25.000 [2024-12-06 15:39:07.993821] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:25.000 [2024-12-06 15:39:07.994437] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:25.000 [2024-12-06 15:39:07.994465] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:25.000 [2024-12-06 15:39:07.994602] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:12:25.000 [2024-12-06 15:39:07.994638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:25.000 pt2
00:12:25.000 15:39:07
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.000 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:25.000 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:25.000 15:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:25.000 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.000 15:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.000 [2024-12-06 15:39:08.005640] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:25.000 [2024-12-06 15:39:08.005702] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.000 [2024-12-06 15:39:08.005721] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:25.000 [2024-12-06 15:39:08.005735] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.000 [2024-12-06 15:39:08.006190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.000 [2024-12-06 15:39:08.006217] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:25.000 [2024-12-06 15:39:08.006291] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:25.000 [2024-12-06 15:39:08.006318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:25.000 [2024-12-06 15:39:08.006451] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:25.000 [2024-12-06 15:39:08.006466] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:25.000 [2024-12-06 15:39:08.006775] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:12:25.000 [2024-12-06 15:39:08.006952] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:25.000 [2024-12-06 15:39:08.006967] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:25.000 [2024-12-06 15:39:08.007115] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:25.000 pt3 00:12:25.000 15:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.000 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:25.000 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:25.000 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:25.000 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:25.000 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.000 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:25.000 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:25.000 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:25.000 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.000 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.000 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.000 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.000 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.000 15:39:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.000 15:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.000 15:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.000 15:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.000 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.000 "name": "raid_bdev1", 00:12:25.000 "uuid": "40a47543-475e-46da-ab84-e4d9afe43151", 00:12:25.000 "strip_size_kb": 64, 00:12:25.000 "state": "online", 00:12:25.000 "raid_level": "concat", 00:12:25.000 "superblock": true, 00:12:25.000 "num_base_bdevs": 3, 00:12:25.000 "num_base_bdevs_discovered": 3, 00:12:25.000 "num_base_bdevs_operational": 3, 00:12:25.000 "base_bdevs_list": [ 00:12:25.000 { 00:12:25.000 "name": "pt1", 00:12:25.000 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:25.000 "is_configured": true, 00:12:25.000 "data_offset": 2048, 00:12:25.000 "data_size": 63488 00:12:25.000 }, 00:12:25.000 { 00:12:25.000 "name": "pt2", 00:12:25.000 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:25.000 "is_configured": true, 00:12:25.000 "data_offset": 2048, 00:12:25.000 "data_size": 63488 00:12:25.000 }, 00:12:25.000 { 00:12:25.000 "name": "pt3", 00:12:25.000 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:25.000 "is_configured": true, 00:12:25.000 "data_offset": 2048, 00:12:25.000 "data_size": 63488 00:12:25.000 } 00:12:25.000 ] 00:12:25.000 }' 00:12:25.000 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.000 15:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.260 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:25.260 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=raid_bdev1 00:12:25.260 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:25.260 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:25.260 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:25.260 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:25.260 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:25.260 15:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.260 15:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.260 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:25.260 [2024-12-06 15:39:08.393755] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:25.260 15:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.260 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:25.260 "name": "raid_bdev1", 00:12:25.260 "aliases": [ 00:12:25.260 "40a47543-475e-46da-ab84-e4d9afe43151" 00:12:25.260 ], 00:12:25.260 "product_name": "Raid Volume", 00:12:25.260 "block_size": 512, 00:12:25.260 "num_blocks": 190464, 00:12:25.260 "uuid": "40a47543-475e-46da-ab84-e4d9afe43151", 00:12:25.260 "assigned_rate_limits": { 00:12:25.260 "rw_ios_per_sec": 0, 00:12:25.260 "rw_mbytes_per_sec": 0, 00:12:25.260 "r_mbytes_per_sec": 0, 00:12:25.260 "w_mbytes_per_sec": 0 00:12:25.260 }, 00:12:25.260 "claimed": false, 00:12:25.260 "zoned": false, 00:12:25.260 "supported_io_types": { 00:12:25.260 "read": true, 00:12:25.260 "write": true, 00:12:25.260 "unmap": true, 00:12:25.260 "flush": true, 00:12:25.260 "reset": true, 00:12:25.260 "nvme_admin": false, 00:12:25.260 "nvme_io": false, 00:12:25.260 
"nvme_io_md": false, 00:12:25.260 "write_zeroes": true, 00:12:25.260 "zcopy": false, 00:12:25.260 "get_zone_info": false, 00:12:25.260 "zone_management": false, 00:12:25.260 "zone_append": false, 00:12:25.260 "compare": false, 00:12:25.260 "compare_and_write": false, 00:12:25.260 "abort": false, 00:12:25.260 "seek_hole": false, 00:12:25.260 "seek_data": false, 00:12:25.260 "copy": false, 00:12:25.260 "nvme_iov_md": false 00:12:25.260 }, 00:12:25.260 "memory_domains": [ 00:12:25.260 { 00:12:25.260 "dma_device_id": "system", 00:12:25.260 "dma_device_type": 1 00:12:25.260 }, 00:12:25.260 { 00:12:25.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.260 "dma_device_type": 2 00:12:25.260 }, 00:12:25.260 { 00:12:25.260 "dma_device_id": "system", 00:12:25.260 "dma_device_type": 1 00:12:25.260 }, 00:12:25.260 { 00:12:25.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.260 "dma_device_type": 2 00:12:25.260 }, 00:12:25.260 { 00:12:25.260 "dma_device_id": "system", 00:12:25.260 "dma_device_type": 1 00:12:25.260 }, 00:12:25.260 { 00:12:25.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.260 "dma_device_type": 2 00:12:25.260 } 00:12:25.260 ], 00:12:25.260 "driver_specific": { 00:12:25.260 "raid": { 00:12:25.260 "uuid": "40a47543-475e-46da-ab84-e4d9afe43151", 00:12:25.260 "strip_size_kb": 64, 00:12:25.260 "state": "online", 00:12:25.260 "raid_level": "concat", 00:12:25.260 "superblock": true, 00:12:25.260 "num_base_bdevs": 3, 00:12:25.260 "num_base_bdevs_discovered": 3, 00:12:25.260 "num_base_bdevs_operational": 3, 00:12:25.260 "base_bdevs_list": [ 00:12:25.260 { 00:12:25.260 "name": "pt1", 00:12:25.260 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:25.260 "is_configured": true, 00:12:25.260 "data_offset": 2048, 00:12:25.260 "data_size": 63488 00:12:25.260 }, 00:12:25.260 { 00:12:25.260 "name": "pt2", 00:12:25.260 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:25.260 "is_configured": true, 00:12:25.260 "data_offset": 2048, 00:12:25.260 "data_size": 
63488 00:12:25.260 }, 00:12:25.260 { 00:12:25.260 "name": "pt3", 00:12:25.260 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:25.260 "is_configured": true, 00:12:25.260 "data_offset": 2048, 00:12:25.260 "data_size": 63488 00:12:25.260 } 00:12:25.260 ] 00:12:25.260 } 00:12:25.260 } 00:12:25.260 }' 00:12:25.260 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:25.260 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:25.260 pt2 00:12:25.260 pt3' 00:12:25.260 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.260 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:25.260 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.260 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.260 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:25.260 15:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.260 15:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.260 15:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.260 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.260 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.260 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.260 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:12:25.260 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.260 15:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.260 15:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.520 15:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.520 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.520 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.520 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.520 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:25.520 15:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.520 15:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.520 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.520 15:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.520 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.520 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.520 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:25.520 15:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.520 15:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.520 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] 
| .uuid' 00:12:25.520 [2024-12-06 15:39:08.629274] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:25.520 15:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.520 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 40a47543-475e-46da-ab84-e4d9afe43151 '!=' 40a47543-475e-46da-ab84-e4d9afe43151 ']' 00:12:25.520 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:12:25.520 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:25.520 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:25.520 15:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66878 00:12:25.520 15:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66878 ']' 00:12:25.520 15:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66878 00:12:25.520 15:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:25.520 15:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:25.520 15:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66878 00:12:25.520 killing process with pid 66878 00:12:25.520 15:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:25.520 15:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:25.520 15:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66878' 00:12:25.520 15:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66878 00:12:25.520 [2024-12-06 15:39:08.711054] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:25.520 15:39:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66878 00:12:25.520 [2024-12-06 15:39:08.711172] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:25.520 [2024-12-06 15:39:08.711261] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:25.520 [2024-12-06 15:39:08.711277] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:25.779 [2024-12-06 15:39:09.046608] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:27.158 15:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:27.158 00:12:27.158 real 0m5.270s 00:12:27.158 user 0m7.247s 00:12:27.158 sys 0m1.154s 00:12:27.158 15:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:27.158 15:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.158 ************************************ 00:12:27.158 END TEST raid_superblock_test 00:12:27.158 ************************************ 00:12:27.158 15:39:10 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:12:27.158 15:39:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:27.158 15:39:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:27.158 15:39:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:27.158 ************************************ 00:12:27.158 START TEST raid_read_error_test 00:12:27.158 ************************************ 00:12:27.158 15:39:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:12:27.158 15:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:27.158 15:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 
00:12:27.158 15:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:27.158 15:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:27.158 15:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:27.158 15:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:27.158 15:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:27.158 15:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:27.158 15:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:27.158 15:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:27.158 15:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:27.158 15:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:27.158 15:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:27.158 15:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:27.158 15:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:27.158 15:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:27.158 15:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:27.158 15:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:27.158 15:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:27.158 15:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:27.158 15:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:27.158 15:39:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:27.158 15:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:27.158 15:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:27.158 15:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:27.158 15:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.kTFyoFevdH 00:12:27.158 15:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67131 00:12:27.158 15:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:27.158 15:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67131 00:12:27.158 15:39:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67131 ']' 00:12:27.158 15:39:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.158 15:39:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:27.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.158 15:39:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.158 15:39:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:27.158 15:39:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.416 [2024-12-06 15:39:10.508872] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:12:27.416 [2024-12-06 15:39:10.509016] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67131 ] 00:12:27.416 [2024-12-06 15:39:10.698013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.675 [2024-12-06 15:39:10.844424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.943 [2024-12-06 15:39:11.090249] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:27.943 [2024-12-06 15:39:11.090338] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:28.200 15:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:28.200 15:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:28.200 15:39:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:28.200 15:39:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:28.200 15:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.200 15:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.200 BaseBdev1_malloc 00:12:28.200 15:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.200 15:39:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:28.200 15:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.200 15:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.200 true 00:12:28.200 15:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:28.200 15:39:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:28.200 15:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.200 15:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.200 [2024-12-06 15:39:11.415231] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:28.200 [2024-12-06 15:39:11.415454] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.200 [2024-12-06 15:39:11.415495] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:28.200 [2024-12-06 15:39:11.415537] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.200 [2024-12-06 15:39:11.418330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.200 [2024-12-06 15:39:11.418377] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:28.200 BaseBdev1 00:12:28.200 15:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.200 15:39:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:28.200 15:39:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:28.200 15:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.200 15:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.200 BaseBdev2_malloc 00:12:28.200 15:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.200 15:39:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:28.200 15:39:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.200 15:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.200 true 00:12:28.200 15:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.200 15:39:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:28.200 15:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.200 15:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.200 [2024-12-06 15:39:11.489630] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:28.200 [2024-12-06 15:39:11.489860] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.200 [2024-12-06 15:39:11.489897] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:28.200 [2024-12-06 15:39:11.489921] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.200 [2024-12-06 15:39:11.492778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.200 [2024-12-06 15:39:11.492824] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:28.456 BaseBdev2 00:12:28.456 15:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.457 15:39:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:28.457 15:39:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:28.457 15:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.457 15:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.457 BaseBdev3_malloc 00:12:28.457 15:39:11 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.457 15:39:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:28.457 15:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.457 15:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.457 true 00:12:28.457 15:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.457 15:39:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:28.457 15:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.457 15:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.457 [2024-12-06 15:39:11.579257] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:28.457 [2024-12-06 15:39:11.579315] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.457 [2024-12-06 15:39:11.579335] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:28.457 [2024-12-06 15:39:11.579350] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.457 [2024-12-06 15:39:11.582067] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.457 [2024-12-06 15:39:11.582113] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:28.457 BaseBdev3 00:12:28.457 15:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.457 15:39:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:28.457 15:39:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.457 15:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.457 [2024-12-06 15:39:11.591344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:28.457 [2024-12-06 15:39:11.593769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:28.457 [2024-12-06 15:39:11.593843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:28.457 [2024-12-06 15:39:11.594052] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:28.457 [2024-12-06 15:39:11.594065] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:28.457 [2024-12-06 15:39:11.594347] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:28.457 [2024-12-06 15:39:11.594538] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:28.457 [2024-12-06 15:39:11.594558] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:28.457 [2024-12-06 15:39:11.594700] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:28.457 15:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.457 15:39:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:28.457 15:39:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.457 15:39:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.457 15:39:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:28.457 15:39:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:28.457 15:39:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:28.457 15:39:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.457 15:39:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.457 15:39:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.457 15:39:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.457 15:39:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.457 15:39:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.457 15:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.457 15:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.457 15:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.457 15:39:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.457 "name": "raid_bdev1", 00:12:28.457 "uuid": "4b778bd8-327a-4012-87dd-1e7a71e907a7", 00:12:28.457 "strip_size_kb": 64, 00:12:28.457 "state": "online", 00:12:28.457 "raid_level": "concat", 00:12:28.457 "superblock": true, 00:12:28.457 "num_base_bdevs": 3, 00:12:28.457 "num_base_bdevs_discovered": 3, 00:12:28.457 "num_base_bdevs_operational": 3, 00:12:28.457 "base_bdevs_list": [ 00:12:28.457 { 00:12:28.457 "name": "BaseBdev1", 00:12:28.457 "uuid": "d115ab69-d97b-5a6c-9594-e76e3192c2ef", 00:12:28.457 "is_configured": true, 00:12:28.457 "data_offset": 2048, 00:12:28.457 "data_size": 63488 00:12:28.457 }, 00:12:28.457 { 00:12:28.457 "name": "BaseBdev2", 00:12:28.457 "uuid": "2036f4b3-27f7-5a5e-92ff-6373da5afe5a", 00:12:28.457 "is_configured": true, 00:12:28.457 "data_offset": 2048, 00:12:28.457 "data_size": 63488 
00:12:28.457 }, 00:12:28.457 { 00:12:28.457 "name": "BaseBdev3", 00:12:28.457 "uuid": "71d37cd5-306c-5e81-bc9c-c2197696f5fd", 00:12:28.457 "is_configured": true, 00:12:28.457 "data_offset": 2048, 00:12:28.457 "data_size": 63488 00:12:28.457 } 00:12:28.457 ] 00:12:28.457 }' 00:12:28.457 15:39:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.457 15:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.020 15:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:29.020 15:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:29.020 [2024-12-06 15:39:12.100116] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:12:29.953 15:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:29.953 15:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.953 15:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.953 15:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.953 15:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:29.953 15:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:29.953 15:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:12:29.953 15:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:29.953 15:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.953 15:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:12:29.953 15:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:29.953 15:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:29.953 15:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:29.953 15:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.953 15:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.953 15:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.953 15:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.953 15:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.953 15:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.953 15:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.953 15:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.953 15:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.953 15:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.953 "name": "raid_bdev1", 00:12:29.953 "uuid": "4b778bd8-327a-4012-87dd-1e7a71e907a7", 00:12:29.953 "strip_size_kb": 64, 00:12:29.953 "state": "online", 00:12:29.953 "raid_level": "concat", 00:12:29.953 "superblock": true, 00:12:29.953 "num_base_bdevs": 3, 00:12:29.953 "num_base_bdevs_discovered": 3, 00:12:29.953 "num_base_bdevs_operational": 3, 00:12:29.953 "base_bdevs_list": [ 00:12:29.953 { 00:12:29.953 "name": "BaseBdev1", 00:12:29.953 "uuid": "d115ab69-d97b-5a6c-9594-e76e3192c2ef", 00:12:29.953 "is_configured": true, 00:12:29.953 "data_offset": 2048, 00:12:29.953 "data_size": 63488 
00:12:29.953 }, 00:12:29.953 { 00:12:29.953 "name": "BaseBdev2", 00:12:29.953 "uuid": "2036f4b3-27f7-5a5e-92ff-6373da5afe5a", 00:12:29.953 "is_configured": true, 00:12:29.953 "data_offset": 2048, 00:12:29.953 "data_size": 63488 00:12:29.953 }, 00:12:29.953 { 00:12:29.953 "name": "BaseBdev3", 00:12:29.953 "uuid": "71d37cd5-306c-5e81-bc9c-c2197696f5fd", 00:12:29.953 "is_configured": true, 00:12:29.953 "data_offset": 2048, 00:12:29.953 "data_size": 63488 00:12:29.953 } 00:12:29.953 ] 00:12:29.953 }' 00:12:29.953 15:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.953 15:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.212 15:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:30.212 15:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.212 15:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.212 [2024-12-06 15:39:13.465621] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:30.212 [2024-12-06 15:39:13.465662] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:30.212 [2024-12-06 15:39:13.468461] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:30.212 [2024-12-06 15:39:13.468536] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.212 [2024-12-06 15:39:13.468583] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:30.212 [2024-12-06 15:39:13.468599] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:30.212 { 00:12:30.212 "results": [ 00:12:30.212 { 00:12:30.212 "job": "raid_bdev1", 00:12:30.212 "core_mask": "0x1", 00:12:30.212 "workload": "randrw", 00:12:30.212 "percentage": 50, 
00:12:30.212 "status": "finished", 00:12:30.212 "queue_depth": 1, 00:12:30.212 "io_size": 131072, 00:12:30.212 "runtime": 1.365259, 00:12:30.212 "iops": 13721.20601292502, 00:12:30.212 "mibps": 1715.1507516156275, 00:12:30.212 "io_failed": 1, 00:12:30.212 "io_timeout": 0, 00:12:30.212 "avg_latency_us": 102.0902217174452, 00:12:30.212 "min_latency_us": 27.142168674698794, 00:12:30.212 "max_latency_us": 1441.0024096385541 00:12:30.212 } 00:12:30.212 ], 00:12:30.212 "core_count": 1 00:12:30.212 } 00:12:30.212 15:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.212 15:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67131 00:12:30.212 15:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67131 ']' 00:12:30.212 15:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67131 00:12:30.212 15:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:30.212 15:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:30.212 15:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67131 00:12:30.469 killing process with pid 67131 00:12:30.469 15:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:30.469 15:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:30.469 15:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67131' 00:12:30.469 15:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67131 00:12:30.469 [2024-12-06 15:39:13.523596] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:30.469 15:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67131 00:12:30.753 [2024-12-06 
15:39:13.783356] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:31.828 15:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.kTFyoFevdH 00:12:31.828 15:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:31.828 15:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:32.085 15:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:12:32.085 15:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:32.085 15:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:32.085 15:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:32.085 15:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:12:32.085 00:12:32.085 real 0m4.739s 00:12:32.085 user 0m5.378s 00:12:32.085 sys 0m0.773s 00:12:32.085 15:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:32.085 ************************************ 00:12:32.085 END TEST raid_read_error_test 00:12:32.085 15:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.085 ************************************ 00:12:32.085 15:39:15 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:12:32.085 15:39:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:32.085 15:39:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.085 15:39:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:32.085 ************************************ 00:12:32.085 START TEST raid_write_error_test 00:12:32.085 ************************************ 00:12:32.085 15:39:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:12:32.085 15:39:15 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:32.085 15:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:12:32.085 15:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:32.085 15:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:32.085 15:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:32.085 15:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:32.085 15:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:32.085 15:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:32.085 15:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:32.085 15:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:32.085 15:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:32.085 15:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:32.085 15:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:32.085 15:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:32.085 15:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:32.085 15:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:32.085 15:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:32.085 15:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:32.085 15:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:32.085 15:39:15 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:32.085 15:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:32.085 15:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:32.085 15:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:32.085 15:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:32.085 15:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:32.085 15:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.KsiUoF2EyA 00:12:32.085 15:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67282 00:12:32.085 15:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67282 00:12:32.085 15:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:32.085 15:39:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67282 ']' 00:12:32.085 15:39:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.085 15:39:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:32.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.085 15:39:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:32.085 15:39:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:32.085 15:39:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.085 [2024-12-06 15:39:15.315149] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:12:32.085 [2024-12-06 15:39:15.315302] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67282 ] 00:12:32.395 [2024-12-06 15:39:15.490940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.395 [2024-12-06 15:39:15.634522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.652 [2024-12-06 15:39:15.886031] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:32.652 [2024-12-06 15:39:15.886122] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:32.909 15:39:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:32.909 15:39:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:32.909 15:39:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:32.909 15:39:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:32.909 15:39:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.909 15:39:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.167 BaseBdev1_malloc 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.167 true 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.167 [2024-12-06 15:39:16.222186] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:33.167 [2024-12-06 15:39:16.222254] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.167 [2024-12-06 15:39:16.222281] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:33.167 [2024-12-06 15:39:16.222297] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.167 [2024-12-06 15:39:16.225037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.167 [2024-12-06 15:39:16.225085] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:33.167 BaseBdev1 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:33.167 BaseBdev2_malloc 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.167 true 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.167 [2024-12-06 15:39:16.301077] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:33.167 [2024-12-06 15:39:16.301143] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.167 [2024-12-06 15:39:16.301162] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:33.167 [2024-12-06 15:39:16.301178] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.167 [2024-12-06 15:39:16.303848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.167 [2024-12-06 15:39:16.303894] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:33.167 BaseBdev2 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:33.167 15:39:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.167 BaseBdev3_malloc 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.167 true 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.167 [2024-12-06 15:39:16.385517] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:33.167 [2024-12-06 15:39:16.385576] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.167 [2024-12-06 15:39:16.385598] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:33.167 [2024-12-06 15:39:16.385614] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.167 [2024-12-06 15:39:16.388313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.167 [2024-12-06 15:39:16.388467] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:12:33.167 BaseBdev3 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.167 [2024-12-06 15:39:16.397600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:33.167 [2024-12-06 15:39:16.399994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:33.167 [2024-12-06 15:39:16.400073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:33.167 [2024-12-06 15:39:16.400286] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:33.167 [2024-12-06 15:39:16.400299] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:33.167 [2024-12-06 15:39:16.400592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:33.167 [2024-12-06 15:39:16.400770] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:33.167 [2024-12-06 15:39:16.400788] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:33.167 [2024-12-06 15:39:16.400942] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.167 15:39:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.167 "name": "raid_bdev1", 00:12:33.167 "uuid": "1be7621b-fabd-49a9-82ad-96cfeab72bb9", 00:12:33.167 "strip_size_kb": 64, 00:12:33.167 "state": "online", 00:12:33.168 "raid_level": "concat", 00:12:33.168 "superblock": true, 00:12:33.168 "num_base_bdevs": 3, 00:12:33.168 "num_base_bdevs_discovered": 3, 00:12:33.168 "num_base_bdevs_operational": 3, 00:12:33.168 "base_bdevs_list": [ 00:12:33.168 { 00:12:33.168 
"name": "BaseBdev1", 00:12:33.168 "uuid": "c6ca51ca-c648-51fe-90a7-c12b25c9f754", 00:12:33.168 "is_configured": true, 00:12:33.168 "data_offset": 2048, 00:12:33.168 "data_size": 63488 00:12:33.168 }, 00:12:33.168 { 00:12:33.168 "name": "BaseBdev2", 00:12:33.168 "uuid": "d0209a1d-649d-5072-a363-3a84b9ce1eb9", 00:12:33.168 "is_configured": true, 00:12:33.168 "data_offset": 2048, 00:12:33.168 "data_size": 63488 00:12:33.168 }, 00:12:33.168 { 00:12:33.168 "name": "BaseBdev3", 00:12:33.168 "uuid": "1b529659-7285-541b-a586-8a80f59e838b", 00:12:33.168 "is_configured": true, 00:12:33.168 "data_offset": 2048, 00:12:33.168 "data_size": 63488 00:12:33.168 } 00:12:33.168 ] 00:12:33.168 }' 00:12:33.168 15:39:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.168 15:39:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.731 15:39:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:33.731 15:39:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:33.731 [2024-12-06 15:39:16.914244] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:12:34.660 15:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:34.660 15:39:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.660 15:39:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.660 15:39:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.660 15:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:34.660 15:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:34.660 15:39:17 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:12:34.660 15:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:34.660 15:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.660 15:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.660 15:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:34.660 15:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:34.660 15:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:34.660 15:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.660 15:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.660 15:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.660 15:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.660 15:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.660 15:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.660 15:39:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.660 15:39:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.660 15:39:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.660 15:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.660 "name": "raid_bdev1", 00:12:34.660 "uuid": "1be7621b-fabd-49a9-82ad-96cfeab72bb9", 00:12:34.660 "strip_size_kb": 64, 00:12:34.660 "state": "online", 
00:12:34.660 "raid_level": "concat", 00:12:34.660 "superblock": true, 00:12:34.660 "num_base_bdevs": 3, 00:12:34.660 "num_base_bdevs_discovered": 3, 00:12:34.660 "num_base_bdevs_operational": 3, 00:12:34.660 "base_bdevs_list": [ 00:12:34.660 { 00:12:34.660 "name": "BaseBdev1", 00:12:34.660 "uuid": "c6ca51ca-c648-51fe-90a7-c12b25c9f754", 00:12:34.660 "is_configured": true, 00:12:34.660 "data_offset": 2048, 00:12:34.660 "data_size": 63488 00:12:34.660 }, 00:12:34.660 { 00:12:34.660 "name": "BaseBdev2", 00:12:34.660 "uuid": "d0209a1d-649d-5072-a363-3a84b9ce1eb9", 00:12:34.660 "is_configured": true, 00:12:34.660 "data_offset": 2048, 00:12:34.660 "data_size": 63488 00:12:34.660 }, 00:12:34.660 { 00:12:34.660 "name": "BaseBdev3", 00:12:34.660 "uuid": "1b529659-7285-541b-a586-8a80f59e838b", 00:12:34.660 "is_configured": true, 00:12:34.660 "data_offset": 2048, 00:12:34.660 "data_size": 63488 00:12:34.660 } 00:12:34.660 ] 00:12:34.660 }' 00:12:34.660 15:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.660 15:39:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.224 15:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:35.224 15:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.224 15:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.224 [2024-12-06 15:39:18.251648] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:35.224 [2024-12-06 15:39:18.251684] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:35.224 [2024-12-06 15:39:18.254525] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:35.224 [2024-12-06 15:39:18.254584] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.224 [2024-12-06 15:39:18.254629] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:35.224 [2024-12-06 15:39:18.254645] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:35.224 { 00:12:35.224 "results": [ 00:12:35.224 { 00:12:35.224 "job": "raid_bdev1", 00:12:35.224 "core_mask": "0x1", 00:12:35.224 "workload": "randrw", 00:12:35.224 "percentage": 50, 00:12:35.224 "status": "finished", 00:12:35.224 "queue_depth": 1, 00:12:35.224 "io_size": 131072, 00:12:35.224 "runtime": 1.336999, 00:12:35.224 "iops": 13534.04153630631, 00:12:35.224 "mibps": 1691.7551920382887, 00:12:35.224 "io_failed": 1, 00:12:35.224 "io_timeout": 0, 00:12:35.224 "avg_latency_us": 103.55070076947933, 00:12:35.224 "min_latency_us": 27.553413654618474, 00:12:35.224 "max_latency_us": 1421.2626506024096 00:12:35.224 } 00:12:35.224 ], 00:12:35.224 "core_count": 1 00:12:35.224 } 00:12:35.224 15:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.224 15:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67282 00:12:35.224 15:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67282 ']' 00:12:35.224 15:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67282 00:12:35.224 15:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:35.224 15:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:35.224 15:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67282 00:12:35.224 15:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:35.224 15:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:35.224 15:39:18 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 67282' 00:12:35.224 killing process with pid 67282 00:12:35.224 15:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67282 00:12:35.224 15:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67282 00:12:35.224 [2024-12-06 15:39:18.303471] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:35.481 [2024-12-06 15:39:18.558169] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:36.851 15:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.KsiUoF2EyA 00:12:36.851 15:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:36.851 15:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:36.851 15:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:12:36.851 15:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:36.851 15:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:36.851 15:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:36.851 15:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:12:36.851 00:12:36.851 real 0m4.708s 00:12:36.851 user 0m5.374s 00:12:36.851 sys 0m0.740s 00:12:36.851 15:39:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:36.851 ************************************ 00:12:36.851 END TEST raid_write_error_test 00:12:36.851 ************************************ 00:12:36.851 15:39:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.851 15:39:19 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:36.851 15:39:19 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:12:36.851 15:39:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:36.851 15:39:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:36.851 15:39:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:36.851 ************************************ 00:12:36.851 START TEST raid_state_function_test 00:12:36.851 ************************************ 00:12:36.851 15:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:12:36.851 15:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:36.851 15:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:36.851 15:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:36.851 15:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:36.851 15:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:36.851 15:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:36.851 15:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:36.851 15:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:36.851 15:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:36.851 15:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:36.851 15:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:36.851 15:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:36.851 15:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:36.851 15:39:19 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:36.851 15:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:36.851 15:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:36.851 15:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:36.851 15:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:36.851 15:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:36.851 15:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:36.851 15:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:36.851 15:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:36.851 15:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:36.851 15:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:36.851 15:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:36.851 Process raid pid: 67420 00:12:36.851 15:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67420 00:12:36.851 15:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67420' 00:12:36.851 15:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67420 00:12:36.851 15:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:36.851 15:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67420 ']' 00:12:36.851 15:39:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.851 15:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:36.851 15:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.851 15:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:36.851 15:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.851 [2024-12-06 15:39:20.099986] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:12:36.851 [2024-12-06 15:39:20.100142] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:37.109 [2024-12-06 15:39:20.288220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.367 [2024-12-06 15:39:20.442281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.625 [2024-12-06 15:39:20.686973] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:37.625 [2024-12-06 15:39:20.687042] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:37.884 15:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:37.884 15:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:37.884 15:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:37.884 15:39:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.884 15:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.884 [2024-12-06 15:39:20.959160] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:37.884 [2024-12-06 15:39:20.959391] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:37.884 [2024-12-06 15:39:20.959419] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:37.884 [2024-12-06 15:39:20.959436] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:37.884 [2024-12-06 15:39:20.959444] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:37.884 [2024-12-06 15:39:20.959457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:37.884 15:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.884 15:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:37.884 15:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:37.884 15:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:37.884 15:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.884 15:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.884 15:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:37.884 15:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.884 15:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.884 
15:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.884 15:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.884 15:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.884 15:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.884 15:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.884 15:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.884 15:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.884 15:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.884 "name": "Existed_Raid", 00:12:37.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.884 "strip_size_kb": 0, 00:12:37.884 "state": "configuring", 00:12:37.884 "raid_level": "raid1", 00:12:37.884 "superblock": false, 00:12:37.884 "num_base_bdevs": 3, 00:12:37.884 "num_base_bdevs_discovered": 0, 00:12:37.884 "num_base_bdevs_operational": 3, 00:12:37.884 "base_bdevs_list": [ 00:12:37.884 { 00:12:37.884 "name": "BaseBdev1", 00:12:37.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.884 "is_configured": false, 00:12:37.884 "data_offset": 0, 00:12:37.884 "data_size": 0 00:12:37.884 }, 00:12:37.884 { 00:12:37.884 "name": "BaseBdev2", 00:12:37.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.884 "is_configured": false, 00:12:37.884 "data_offset": 0, 00:12:37.884 "data_size": 0 00:12:37.884 }, 00:12:37.884 { 00:12:37.884 "name": "BaseBdev3", 00:12:37.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.884 "is_configured": false, 00:12:37.884 "data_offset": 0, 00:12:37.884 "data_size": 0 00:12:37.884 } 00:12:37.884 ] 00:12:37.884 }' 00:12:37.884 15:39:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.884 15:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.142 15:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:38.142 15:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.142 15:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.142 [2024-12-06 15:39:21.362558] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:38.142 [2024-12-06 15:39:21.362753] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:38.142 15:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.142 15:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:38.142 15:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.142 15:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.142 [2024-12-06 15:39:21.374494] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:38.142 [2024-12-06 15:39:21.374693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:38.142 [2024-12-06 15:39:21.374716] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:38.142 [2024-12-06 15:39:21.374732] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:38.142 [2024-12-06 15:39:21.374741] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:38.142 [2024-12-06 15:39:21.374755] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:38.142 15:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.142 15:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:38.142 15:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.142 15:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.142 [2024-12-06 15:39:21.434089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:38.401 BaseBdev1 00:12:38.401 15:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.401 15:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:38.401 15:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:38.401 15:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:38.401 15:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:38.401 15:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:38.401 15:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:38.401 15:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:38.401 15:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.401 15:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.401 15:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.401 15:39:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:38.401 15:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.401 15:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.401 [ 00:12:38.401 { 00:12:38.401 "name": "BaseBdev1", 00:12:38.401 "aliases": [ 00:12:38.401 "78ffbcff-1206-4666-8030-03ddd44cc177" 00:12:38.401 ], 00:12:38.401 "product_name": "Malloc disk", 00:12:38.401 "block_size": 512, 00:12:38.401 "num_blocks": 65536, 00:12:38.401 "uuid": "78ffbcff-1206-4666-8030-03ddd44cc177", 00:12:38.401 "assigned_rate_limits": { 00:12:38.401 "rw_ios_per_sec": 0, 00:12:38.401 "rw_mbytes_per_sec": 0, 00:12:38.401 "r_mbytes_per_sec": 0, 00:12:38.401 "w_mbytes_per_sec": 0 00:12:38.401 }, 00:12:38.401 "claimed": true, 00:12:38.401 "claim_type": "exclusive_write", 00:12:38.401 "zoned": false, 00:12:38.401 "supported_io_types": { 00:12:38.401 "read": true, 00:12:38.401 "write": true, 00:12:38.401 "unmap": true, 00:12:38.401 "flush": true, 00:12:38.401 "reset": true, 00:12:38.401 "nvme_admin": false, 00:12:38.401 "nvme_io": false, 00:12:38.401 "nvme_io_md": false, 00:12:38.401 "write_zeroes": true, 00:12:38.401 "zcopy": true, 00:12:38.401 "get_zone_info": false, 00:12:38.401 "zone_management": false, 00:12:38.401 "zone_append": false, 00:12:38.401 "compare": false, 00:12:38.401 "compare_and_write": false, 00:12:38.401 "abort": true, 00:12:38.401 "seek_hole": false, 00:12:38.402 "seek_data": false, 00:12:38.402 "copy": true, 00:12:38.402 "nvme_iov_md": false 00:12:38.402 }, 00:12:38.402 "memory_domains": [ 00:12:38.402 { 00:12:38.402 "dma_device_id": "system", 00:12:38.402 "dma_device_type": 1 00:12:38.402 }, 00:12:38.402 { 00:12:38.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.402 "dma_device_type": 2 00:12:38.402 } 00:12:38.402 ], 00:12:38.402 "driver_specific": {} 00:12:38.402 } 00:12:38.402 ] 00:12:38.402 15:39:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.402 15:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:38.402 15:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:38.402 15:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:38.402 15:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:38.402 15:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.402 15:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.402 15:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:38.402 15:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.402 15:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.402 15:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.402 15:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.402 15:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.402 15:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.402 15:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.402 15:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.402 15:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.402 15:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:12:38.402 "name": "Existed_Raid", 00:12:38.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.402 "strip_size_kb": 0, 00:12:38.402 "state": "configuring", 00:12:38.402 "raid_level": "raid1", 00:12:38.402 "superblock": false, 00:12:38.402 "num_base_bdevs": 3, 00:12:38.402 "num_base_bdevs_discovered": 1, 00:12:38.402 "num_base_bdevs_operational": 3, 00:12:38.402 "base_bdevs_list": [ 00:12:38.402 { 00:12:38.402 "name": "BaseBdev1", 00:12:38.402 "uuid": "78ffbcff-1206-4666-8030-03ddd44cc177", 00:12:38.402 "is_configured": true, 00:12:38.402 "data_offset": 0, 00:12:38.402 "data_size": 65536 00:12:38.402 }, 00:12:38.402 { 00:12:38.402 "name": "BaseBdev2", 00:12:38.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.402 "is_configured": false, 00:12:38.402 "data_offset": 0, 00:12:38.402 "data_size": 0 00:12:38.402 }, 00:12:38.402 { 00:12:38.402 "name": "BaseBdev3", 00:12:38.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.402 "is_configured": false, 00:12:38.402 "data_offset": 0, 00:12:38.402 "data_size": 0 00:12:38.402 } 00:12:38.402 ] 00:12:38.402 }' 00:12:38.402 15:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.402 15:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.659 15:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:38.659 15:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.659 15:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.659 [2024-12-06 15:39:21.929497] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:38.659 [2024-12-06 15:39:21.929750] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:38.660 15:39:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.660 15:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:38.660 15:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.660 15:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.660 [2024-12-06 15:39:21.937569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:38.660 [2024-12-06 15:39:21.940050] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:38.660 [2024-12-06 15:39:21.940108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:38.660 [2024-12-06 15:39:21.940122] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:38.660 [2024-12-06 15:39:21.940135] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:38.660 15:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.660 15:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:38.660 15:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:38.660 15:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:38.660 15:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:38.660 15:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:38.660 15:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.660 15:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:12:38.660 15:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:38.660 15:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.660 15:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.660 15:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.660 15:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.660 15:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.660 15:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.660 15:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.660 15:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.918 15:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.918 15:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.918 "name": "Existed_Raid", 00:12:38.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.918 "strip_size_kb": 0, 00:12:38.918 "state": "configuring", 00:12:38.918 "raid_level": "raid1", 00:12:38.918 "superblock": false, 00:12:38.918 "num_base_bdevs": 3, 00:12:38.918 "num_base_bdevs_discovered": 1, 00:12:38.918 "num_base_bdevs_operational": 3, 00:12:38.918 "base_bdevs_list": [ 00:12:38.918 { 00:12:38.918 "name": "BaseBdev1", 00:12:38.918 "uuid": "78ffbcff-1206-4666-8030-03ddd44cc177", 00:12:38.918 "is_configured": true, 00:12:38.918 "data_offset": 0, 00:12:38.918 "data_size": 65536 00:12:38.918 }, 00:12:38.918 { 00:12:38.918 "name": "BaseBdev2", 00:12:38.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.918 
"is_configured": false, 00:12:38.918 "data_offset": 0, 00:12:38.918 "data_size": 0 00:12:38.918 }, 00:12:38.918 { 00:12:38.918 "name": "BaseBdev3", 00:12:38.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.918 "is_configured": false, 00:12:38.918 "data_offset": 0, 00:12:38.918 "data_size": 0 00:12:38.918 } 00:12:38.918 ] 00:12:38.918 }' 00:12:38.918 15:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.918 15:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.176 15:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:39.176 15:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.176 15:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.176 [2024-12-06 15:39:22.415327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:39.176 BaseBdev2 00:12:39.176 15:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.176 15:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:39.176 15:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:39.176 15:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:39.176 15:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:39.176 15:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:39.176 15:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:39.176 15:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:39.176 15:39:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.176 15:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.176 15:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.176 15:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:39.176 15:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.176 15:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.176 [ 00:12:39.176 { 00:12:39.176 "name": "BaseBdev2", 00:12:39.176 "aliases": [ 00:12:39.176 "6e4a6072-9b28-4991-8577-a9443372f3f3" 00:12:39.176 ], 00:12:39.176 "product_name": "Malloc disk", 00:12:39.176 "block_size": 512, 00:12:39.176 "num_blocks": 65536, 00:12:39.176 "uuid": "6e4a6072-9b28-4991-8577-a9443372f3f3", 00:12:39.177 "assigned_rate_limits": { 00:12:39.177 "rw_ios_per_sec": 0, 00:12:39.177 "rw_mbytes_per_sec": 0, 00:12:39.177 "r_mbytes_per_sec": 0, 00:12:39.177 "w_mbytes_per_sec": 0 00:12:39.177 }, 00:12:39.177 "claimed": true, 00:12:39.177 "claim_type": "exclusive_write", 00:12:39.177 "zoned": false, 00:12:39.177 "supported_io_types": { 00:12:39.177 "read": true, 00:12:39.177 "write": true, 00:12:39.177 "unmap": true, 00:12:39.177 "flush": true, 00:12:39.177 "reset": true, 00:12:39.177 "nvme_admin": false, 00:12:39.177 "nvme_io": false, 00:12:39.177 "nvme_io_md": false, 00:12:39.177 "write_zeroes": true, 00:12:39.177 "zcopy": true, 00:12:39.177 "get_zone_info": false, 00:12:39.177 "zone_management": false, 00:12:39.177 "zone_append": false, 00:12:39.177 "compare": false, 00:12:39.177 "compare_and_write": false, 00:12:39.177 "abort": true, 00:12:39.177 "seek_hole": false, 00:12:39.177 "seek_data": false, 00:12:39.177 "copy": true, 00:12:39.177 "nvme_iov_md": false 00:12:39.177 }, 00:12:39.177 
"memory_domains": [ 00:12:39.177 { 00:12:39.177 "dma_device_id": "system", 00:12:39.177 "dma_device_type": 1 00:12:39.177 }, 00:12:39.177 { 00:12:39.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.177 "dma_device_type": 2 00:12:39.177 } 00:12:39.177 ], 00:12:39.177 "driver_specific": {} 00:12:39.177 } 00:12:39.177 ] 00:12:39.177 15:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.177 15:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:39.177 15:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:39.177 15:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:39.177 15:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:39.177 15:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:39.177 15:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:39.177 15:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:39.177 15:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:39.177 15:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:39.177 15:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.177 15:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.177 15:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.177 15:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.177 15:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:39.177 15:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.177 15:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.177 15:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.435 15:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.435 15:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.435 "name": "Existed_Raid", 00:12:39.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.435 "strip_size_kb": 0, 00:12:39.435 "state": "configuring", 00:12:39.435 "raid_level": "raid1", 00:12:39.435 "superblock": false, 00:12:39.435 "num_base_bdevs": 3, 00:12:39.435 "num_base_bdevs_discovered": 2, 00:12:39.435 "num_base_bdevs_operational": 3, 00:12:39.435 "base_bdevs_list": [ 00:12:39.435 { 00:12:39.435 "name": "BaseBdev1", 00:12:39.435 "uuid": "78ffbcff-1206-4666-8030-03ddd44cc177", 00:12:39.435 "is_configured": true, 00:12:39.435 "data_offset": 0, 00:12:39.435 "data_size": 65536 00:12:39.435 }, 00:12:39.435 { 00:12:39.435 "name": "BaseBdev2", 00:12:39.435 "uuid": "6e4a6072-9b28-4991-8577-a9443372f3f3", 00:12:39.435 "is_configured": true, 00:12:39.435 "data_offset": 0, 00:12:39.435 "data_size": 65536 00:12:39.435 }, 00:12:39.435 { 00:12:39.435 "name": "BaseBdev3", 00:12:39.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.435 "is_configured": false, 00:12:39.435 "data_offset": 0, 00:12:39.435 "data_size": 0 00:12:39.435 } 00:12:39.435 ] 00:12:39.435 }' 00:12:39.435 15:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.435 15:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.694 15:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:12:39.694 15:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.694 15:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.694 [2024-12-06 15:39:22.938749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:39.694 [2024-12-06 15:39:22.938831] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:39.694 [2024-12-06 15:39:22.938850] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:39.694 [2024-12-06 15:39:22.939206] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:39.694 [2024-12-06 15:39:22.939420] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:39.694 [2024-12-06 15:39:22.939431] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:39.694 [2024-12-06 15:39:22.939789] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:39.694 BaseBdev3 00:12:39.694 15:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.694 15:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:39.694 15:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:39.694 15:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:39.694 15:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:39.694 15:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:39.694 15:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:39.694 15:39:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:39.694 15:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.694 15:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.694 15:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.694 15:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:39.694 15:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.694 15:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.694 [ 00:12:39.694 { 00:12:39.694 "name": "BaseBdev3", 00:12:39.694 "aliases": [ 00:12:39.694 "36838791-bd83-46b9-b549-1516e725329f" 00:12:39.694 ], 00:12:39.694 "product_name": "Malloc disk", 00:12:39.694 "block_size": 512, 00:12:39.694 "num_blocks": 65536, 00:12:39.694 "uuid": "36838791-bd83-46b9-b549-1516e725329f", 00:12:39.694 "assigned_rate_limits": { 00:12:39.694 "rw_ios_per_sec": 0, 00:12:39.694 "rw_mbytes_per_sec": 0, 00:12:39.694 "r_mbytes_per_sec": 0, 00:12:39.694 "w_mbytes_per_sec": 0 00:12:39.694 }, 00:12:39.694 "claimed": true, 00:12:39.694 "claim_type": "exclusive_write", 00:12:39.694 "zoned": false, 00:12:39.694 "supported_io_types": { 00:12:39.694 "read": true, 00:12:39.694 "write": true, 00:12:39.694 "unmap": true, 00:12:39.694 "flush": true, 00:12:39.694 "reset": true, 00:12:39.694 "nvme_admin": false, 00:12:39.694 "nvme_io": false, 00:12:39.694 "nvme_io_md": false, 00:12:39.694 "write_zeroes": true, 00:12:39.694 "zcopy": true, 00:12:39.694 "get_zone_info": false, 00:12:39.694 "zone_management": false, 00:12:39.694 "zone_append": false, 00:12:39.694 "compare": false, 00:12:39.953 "compare_and_write": false, 00:12:39.953 "abort": true, 00:12:39.953 "seek_hole": false, 00:12:39.953 "seek_data": false, 00:12:39.953 
"copy": true, 00:12:39.953 "nvme_iov_md": false 00:12:39.953 }, 00:12:39.953 "memory_domains": [ 00:12:39.953 { 00:12:39.953 "dma_device_id": "system", 00:12:39.953 "dma_device_type": 1 00:12:39.953 }, 00:12:39.953 { 00:12:39.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.953 "dma_device_type": 2 00:12:39.953 } 00:12:39.953 ], 00:12:39.953 "driver_specific": {} 00:12:39.953 } 00:12:39.953 ] 00:12:39.953 15:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.953 15:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:39.953 15:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:39.953 15:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:39.953 15:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:39.953 15:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:39.953 15:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:39.953 15:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:39.953 15:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:39.953 15:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:39.953 15:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.953 15:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.953 15:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.953 15:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.953 15:39:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.953 15:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.953 15:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.953 15:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.953 15:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.953 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.953 "name": "Existed_Raid", 00:12:39.953 "uuid": "fb647cf5-66fd-49af-9f17-c8186ced1917", 00:12:39.953 "strip_size_kb": 0, 00:12:39.953 "state": "online", 00:12:39.953 "raid_level": "raid1", 00:12:39.953 "superblock": false, 00:12:39.953 "num_base_bdevs": 3, 00:12:39.953 "num_base_bdevs_discovered": 3, 00:12:39.953 "num_base_bdevs_operational": 3, 00:12:39.953 "base_bdevs_list": [ 00:12:39.953 { 00:12:39.953 "name": "BaseBdev1", 00:12:39.953 "uuid": "78ffbcff-1206-4666-8030-03ddd44cc177", 00:12:39.953 "is_configured": true, 00:12:39.953 "data_offset": 0, 00:12:39.953 "data_size": 65536 00:12:39.953 }, 00:12:39.953 { 00:12:39.953 "name": "BaseBdev2", 00:12:39.953 "uuid": "6e4a6072-9b28-4991-8577-a9443372f3f3", 00:12:39.953 "is_configured": true, 00:12:39.953 "data_offset": 0, 00:12:39.953 "data_size": 65536 00:12:39.953 }, 00:12:39.953 { 00:12:39.953 "name": "BaseBdev3", 00:12:39.954 "uuid": "36838791-bd83-46b9-b549-1516e725329f", 00:12:39.954 "is_configured": true, 00:12:39.954 "data_offset": 0, 00:12:39.954 "data_size": 65536 00:12:39.954 } 00:12:39.954 ] 00:12:39.954 }' 00:12:39.954 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.954 15:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.212 15:39:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:40.212 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:40.212 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:40.212 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:40.212 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:40.212 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:40.212 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:40.212 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:40.212 15:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.212 15:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.212 [2024-12-06 15:39:23.446497] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:40.212 15:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.212 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:40.212 "name": "Existed_Raid", 00:12:40.212 "aliases": [ 00:12:40.212 "fb647cf5-66fd-49af-9f17-c8186ced1917" 00:12:40.212 ], 00:12:40.212 "product_name": "Raid Volume", 00:12:40.212 "block_size": 512, 00:12:40.212 "num_blocks": 65536, 00:12:40.212 "uuid": "fb647cf5-66fd-49af-9f17-c8186ced1917", 00:12:40.212 "assigned_rate_limits": { 00:12:40.212 "rw_ios_per_sec": 0, 00:12:40.212 "rw_mbytes_per_sec": 0, 00:12:40.212 "r_mbytes_per_sec": 0, 00:12:40.212 "w_mbytes_per_sec": 0 00:12:40.212 }, 00:12:40.212 "claimed": false, 00:12:40.212 "zoned": false, 
00:12:40.212 "supported_io_types": { 00:12:40.212 "read": true, 00:12:40.212 "write": true, 00:12:40.212 "unmap": false, 00:12:40.212 "flush": false, 00:12:40.212 "reset": true, 00:12:40.212 "nvme_admin": false, 00:12:40.212 "nvme_io": false, 00:12:40.212 "nvme_io_md": false, 00:12:40.212 "write_zeroes": true, 00:12:40.212 "zcopy": false, 00:12:40.212 "get_zone_info": false, 00:12:40.212 "zone_management": false, 00:12:40.212 "zone_append": false, 00:12:40.212 "compare": false, 00:12:40.212 "compare_and_write": false, 00:12:40.212 "abort": false, 00:12:40.212 "seek_hole": false, 00:12:40.212 "seek_data": false, 00:12:40.212 "copy": false, 00:12:40.212 "nvme_iov_md": false 00:12:40.212 }, 00:12:40.212 "memory_domains": [ 00:12:40.212 { 00:12:40.212 "dma_device_id": "system", 00:12:40.212 "dma_device_type": 1 00:12:40.212 }, 00:12:40.212 { 00:12:40.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.212 "dma_device_type": 2 00:12:40.212 }, 00:12:40.212 { 00:12:40.212 "dma_device_id": "system", 00:12:40.212 "dma_device_type": 1 00:12:40.212 }, 00:12:40.212 { 00:12:40.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.212 "dma_device_type": 2 00:12:40.212 }, 00:12:40.212 { 00:12:40.212 "dma_device_id": "system", 00:12:40.212 "dma_device_type": 1 00:12:40.212 }, 00:12:40.212 { 00:12:40.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.212 "dma_device_type": 2 00:12:40.212 } 00:12:40.212 ], 00:12:40.212 "driver_specific": { 00:12:40.212 "raid": { 00:12:40.212 "uuid": "fb647cf5-66fd-49af-9f17-c8186ced1917", 00:12:40.212 "strip_size_kb": 0, 00:12:40.212 "state": "online", 00:12:40.212 "raid_level": "raid1", 00:12:40.212 "superblock": false, 00:12:40.212 "num_base_bdevs": 3, 00:12:40.212 "num_base_bdevs_discovered": 3, 00:12:40.212 "num_base_bdevs_operational": 3, 00:12:40.212 "base_bdevs_list": [ 00:12:40.212 { 00:12:40.212 "name": "BaseBdev1", 00:12:40.212 "uuid": "78ffbcff-1206-4666-8030-03ddd44cc177", 00:12:40.212 "is_configured": true, 00:12:40.212 
"data_offset": 0, 00:12:40.212 "data_size": 65536 00:12:40.212 }, 00:12:40.212 { 00:12:40.212 "name": "BaseBdev2", 00:12:40.212 "uuid": "6e4a6072-9b28-4991-8577-a9443372f3f3", 00:12:40.212 "is_configured": true, 00:12:40.212 "data_offset": 0, 00:12:40.212 "data_size": 65536 00:12:40.212 }, 00:12:40.212 { 00:12:40.212 "name": "BaseBdev3", 00:12:40.212 "uuid": "36838791-bd83-46b9-b549-1516e725329f", 00:12:40.212 "is_configured": true, 00:12:40.212 "data_offset": 0, 00:12:40.212 "data_size": 65536 00:12:40.212 } 00:12:40.212 ] 00:12:40.212 } 00:12:40.212 } 00:12:40.212 }' 00:12:40.212 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:40.470 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:40.471 BaseBdev2 00:12:40.471 BaseBdev3' 00:12:40.471 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.471 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:40.471 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.471 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:40.471 15:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.471 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.471 15:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.471 15:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.471 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:12:40.471 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:40.471 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.471 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.471 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:40.471 15:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.471 15:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.471 15:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.471 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:40.471 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:40.471 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.471 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:40.471 15:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.471 15:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.471 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.471 15:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.471 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:40.471 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:12:40.471 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:40.471 15:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.471 15:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.471 [2024-12-06 15:39:23.693848] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:40.729 15:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.729 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:40.729 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:40.729 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:40.729 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:40.729 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:40.729 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:40.729 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:40.729 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.729 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.729 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.729 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:40.729 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.729 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:12:40.729 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.729 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.729 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.729 15:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.729 15:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.729 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.729 15:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.729 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.729 "name": "Existed_Raid", 00:12:40.729 "uuid": "fb647cf5-66fd-49af-9f17-c8186ced1917", 00:12:40.729 "strip_size_kb": 0, 00:12:40.729 "state": "online", 00:12:40.729 "raid_level": "raid1", 00:12:40.729 "superblock": false, 00:12:40.729 "num_base_bdevs": 3, 00:12:40.729 "num_base_bdevs_discovered": 2, 00:12:40.729 "num_base_bdevs_operational": 2, 00:12:40.729 "base_bdevs_list": [ 00:12:40.729 { 00:12:40.729 "name": null, 00:12:40.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.729 "is_configured": false, 00:12:40.729 "data_offset": 0, 00:12:40.729 "data_size": 65536 00:12:40.729 }, 00:12:40.729 { 00:12:40.729 "name": "BaseBdev2", 00:12:40.729 "uuid": "6e4a6072-9b28-4991-8577-a9443372f3f3", 00:12:40.729 "is_configured": true, 00:12:40.729 "data_offset": 0, 00:12:40.729 "data_size": 65536 00:12:40.729 }, 00:12:40.729 { 00:12:40.729 "name": "BaseBdev3", 00:12:40.729 "uuid": "36838791-bd83-46b9-b549-1516e725329f", 00:12:40.729 "is_configured": true, 00:12:40.729 "data_offset": 0, 00:12:40.729 "data_size": 65536 00:12:40.729 } 00:12:40.729 ] 
00:12:40.729 }' 00:12:40.729 15:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.729 15:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.987 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:40.987 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:40.987 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.987 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:40.987 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.987 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.987 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.987 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:40.987 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:40.987 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:40.987 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.987 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.987 [2024-12-06 15:39:24.248811] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:41.246 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.246 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:41.246 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:41.246 15:39:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.246 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.246 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:41.246 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.246 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.246 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:41.246 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:41.246 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:41.246 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.246 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.246 [2024-12-06 15:39:24.407380] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:41.246 [2024-12-06 15:39:24.407541] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:41.246 [2024-12-06 15:39:24.514536] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:41.246 [2024-12-06 15:39:24.514626] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:41.246 [2024-12-06 15:39:24.514644] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:41.246 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.246 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:41.246 15:39:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:41.246 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.246 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.246 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.246 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:41.246 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.505 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:41.505 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:41.505 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:41.505 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:41.505 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:41.505 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:41.505 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.505 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.505 BaseBdev2 00:12:41.505 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.505 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:41.505 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:41.505 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:41.505 
15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:41.505 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:41.505 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:41.505 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:41.505 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.505 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.505 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.505 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:41.505 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.505 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.505 [ 00:12:41.505 { 00:12:41.505 "name": "BaseBdev2", 00:12:41.505 "aliases": [ 00:12:41.505 "b1358547-9fe9-4331-b1b4-422ab99364a9" 00:12:41.505 ], 00:12:41.505 "product_name": "Malloc disk", 00:12:41.505 "block_size": 512, 00:12:41.505 "num_blocks": 65536, 00:12:41.505 "uuid": "b1358547-9fe9-4331-b1b4-422ab99364a9", 00:12:41.505 "assigned_rate_limits": { 00:12:41.505 "rw_ios_per_sec": 0, 00:12:41.505 "rw_mbytes_per_sec": 0, 00:12:41.505 "r_mbytes_per_sec": 0, 00:12:41.505 "w_mbytes_per_sec": 0 00:12:41.505 }, 00:12:41.505 "claimed": false, 00:12:41.505 "zoned": false, 00:12:41.505 "supported_io_types": { 00:12:41.505 "read": true, 00:12:41.505 "write": true, 00:12:41.505 "unmap": true, 00:12:41.505 "flush": true, 00:12:41.505 "reset": true, 00:12:41.505 "nvme_admin": false, 00:12:41.505 "nvme_io": false, 00:12:41.505 "nvme_io_md": false, 00:12:41.505 "write_zeroes": true, 
00:12:41.505 "zcopy": true, 00:12:41.505 "get_zone_info": false, 00:12:41.505 "zone_management": false, 00:12:41.505 "zone_append": false, 00:12:41.505 "compare": false, 00:12:41.505 "compare_and_write": false, 00:12:41.505 "abort": true, 00:12:41.505 "seek_hole": false, 00:12:41.505 "seek_data": false, 00:12:41.505 "copy": true, 00:12:41.505 "nvme_iov_md": false 00:12:41.505 }, 00:12:41.505 "memory_domains": [ 00:12:41.505 { 00:12:41.505 "dma_device_id": "system", 00:12:41.505 "dma_device_type": 1 00:12:41.505 }, 00:12:41.505 { 00:12:41.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:41.505 "dma_device_type": 2 00:12:41.505 } 00:12:41.505 ], 00:12:41.505 "driver_specific": {} 00:12:41.505 } 00:12:41.505 ] 00:12:41.505 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.505 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:41.505 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:41.505 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:41.505 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:41.505 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.505 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.505 BaseBdev3 00:12:41.505 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.505 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:41.505 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:41.506 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:41.506 15:39:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:41.506 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:41.506 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:41.506 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:41.506 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.506 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.506 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.506 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:41.506 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.506 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.506 [ 00:12:41.506 { 00:12:41.506 "name": "BaseBdev3", 00:12:41.506 "aliases": [ 00:12:41.506 "6669aaa7-10d6-4d0b-a418-91b0712a9192" 00:12:41.506 ], 00:12:41.506 "product_name": "Malloc disk", 00:12:41.506 "block_size": 512, 00:12:41.506 "num_blocks": 65536, 00:12:41.506 "uuid": "6669aaa7-10d6-4d0b-a418-91b0712a9192", 00:12:41.506 "assigned_rate_limits": { 00:12:41.506 "rw_ios_per_sec": 0, 00:12:41.506 "rw_mbytes_per_sec": 0, 00:12:41.506 "r_mbytes_per_sec": 0, 00:12:41.506 "w_mbytes_per_sec": 0 00:12:41.506 }, 00:12:41.506 "claimed": false, 00:12:41.506 "zoned": false, 00:12:41.506 "supported_io_types": { 00:12:41.506 "read": true, 00:12:41.506 "write": true, 00:12:41.506 "unmap": true, 00:12:41.506 "flush": true, 00:12:41.506 "reset": true, 00:12:41.506 "nvme_admin": false, 00:12:41.506 "nvme_io": false, 00:12:41.506 "nvme_io_md": false, 00:12:41.506 "write_zeroes": true, 
00:12:41.506 "zcopy": true, 00:12:41.506 "get_zone_info": false, 00:12:41.506 "zone_management": false, 00:12:41.506 "zone_append": false, 00:12:41.506 "compare": false, 00:12:41.506 "compare_and_write": false, 00:12:41.506 "abort": true, 00:12:41.506 "seek_hole": false, 00:12:41.506 "seek_data": false, 00:12:41.506 "copy": true, 00:12:41.506 "nvme_iov_md": false 00:12:41.506 }, 00:12:41.506 "memory_domains": [ 00:12:41.506 { 00:12:41.506 "dma_device_id": "system", 00:12:41.506 "dma_device_type": 1 00:12:41.506 }, 00:12:41.506 { 00:12:41.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:41.506 "dma_device_type": 2 00:12:41.506 } 00:12:41.506 ], 00:12:41.506 "driver_specific": {} 00:12:41.506 } 00:12:41.506 ] 00:12:41.506 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.506 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:41.506 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:41.506 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:41.506 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:41.506 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.506 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.506 [2024-12-06 15:39:24.764330] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:41.506 [2024-12-06 15:39:24.764397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:41.506 [2024-12-06 15:39:24.764426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:41.506 [2024-12-06 15:39:24.767073] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:41.506 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.506 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:41.506 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:41.506 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:41.506 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.506 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.506 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:41.506 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.506 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.506 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.506 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.506 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.506 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.506 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.506 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.506 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.799 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:12:41.799 "name": "Existed_Raid", 00:12:41.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.799 "strip_size_kb": 0, 00:12:41.799 "state": "configuring", 00:12:41.799 "raid_level": "raid1", 00:12:41.799 "superblock": false, 00:12:41.799 "num_base_bdevs": 3, 00:12:41.799 "num_base_bdevs_discovered": 2, 00:12:41.799 "num_base_bdevs_operational": 3, 00:12:41.799 "base_bdevs_list": [ 00:12:41.799 { 00:12:41.799 "name": "BaseBdev1", 00:12:41.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.799 "is_configured": false, 00:12:41.799 "data_offset": 0, 00:12:41.799 "data_size": 0 00:12:41.799 }, 00:12:41.799 { 00:12:41.799 "name": "BaseBdev2", 00:12:41.799 "uuid": "b1358547-9fe9-4331-b1b4-422ab99364a9", 00:12:41.799 "is_configured": true, 00:12:41.799 "data_offset": 0, 00:12:41.799 "data_size": 65536 00:12:41.799 }, 00:12:41.799 { 00:12:41.799 "name": "BaseBdev3", 00:12:41.799 "uuid": "6669aaa7-10d6-4d0b-a418-91b0712a9192", 00:12:41.799 "is_configured": true, 00:12:41.799 "data_offset": 0, 00:12:41.799 "data_size": 65536 00:12:41.799 } 00:12:41.799 ] 00:12:41.799 }' 00:12:41.799 15:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.799 15:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.084 15:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:42.084 15:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.084 15:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.084 [2024-12-06 15:39:25.195765] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:42.084 15:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.084 15:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:12:42.084 15:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:42.084 15:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:42.085 15:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.085 15:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.085 15:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:42.085 15:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.085 15:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.085 15:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.085 15:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.085 15:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.085 15:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.085 15:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.085 15:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.085 15:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.085 15:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.085 "name": "Existed_Raid", 00:12:42.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.085 "strip_size_kb": 0, 00:12:42.085 "state": "configuring", 00:12:42.085 "raid_level": "raid1", 00:12:42.085 "superblock": false, 00:12:42.085 "num_base_bdevs": 3, 
00:12:42.085 "num_base_bdevs_discovered": 1, 00:12:42.085 "num_base_bdevs_operational": 3, 00:12:42.085 "base_bdevs_list": [ 00:12:42.085 { 00:12:42.085 "name": "BaseBdev1", 00:12:42.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.085 "is_configured": false, 00:12:42.085 "data_offset": 0, 00:12:42.085 "data_size": 0 00:12:42.085 }, 00:12:42.085 { 00:12:42.085 "name": null, 00:12:42.085 "uuid": "b1358547-9fe9-4331-b1b4-422ab99364a9", 00:12:42.085 "is_configured": false, 00:12:42.085 "data_offset": 0, 00:12:42.085 "data_size": 65536 00:12:42.085 }, 00:12:42.085 { 00:12:42.085 "name": "BaseBdev3", 00:12:42.085 "uuid": "6669aaa7-10d6-4d0b-a418-91b0712a9192", 00:12:42.085 "is_configured": true, 00:12:42.085 "data_offset": 0, 00:12:42.085 "data_size": 65536 00:12:42.085 } 00:12:42.085 ] 00:12:42.085 }' 00:12:42.085 15:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.085 15:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.342 15:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:42.342 15:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.342 15:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.342 15:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.599 15:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.599 15:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:42.599 15:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:42.599 15:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.599 15:39:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.599 [2024-12-06 15:39:25.699763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:42.599 BaseBdev1 00:12:42.599 15:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.599 15:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:42.599 15:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:42.599 15:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:42.599 15:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:42.599 15:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:42.599 15:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:42.599 15:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:42.599 15:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.599 15:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.599 15:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.600 15:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:42.600 15:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.600 15:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.600 [ 00:12:42.600 { 00:12:42.600 "name": "BaseBdev1", 00:12:42.600 "aliases": [ 00:12:42.600 "4ff41c45-0b0a-4831-87da-f837b485feed" 00:12:42.600 ], 00:12:42.600 "product_name": "Malloc disk", 
00:12:42.600 "block_size": 512, 00:12:42.600 "num_blocks": 65536, 00:12:42.600 "uuid": "4ff41c45-0b0a-4831-87da-f837b485feed", 00:12:42.600 "assigned_rate_limits": { 00:12:42.600 "rw_ios_per_sec": 0, 00:12:42.600 "rw_mbytes_per_sec": 0, 00:12:42.600 "r_mbytes_per_sec": 0, 00:12:42.600 "w_mbytes_per_sec": 0 00:12:42.600 }, 00:12:42.600 "claimed": true, 00:12:42.600 "claim_type": "exclusive_write", 00:12:42.600 "zoned": false, 00:12:42.600 "supported_io_types": { 00:12:42.600 "read": true, 00:12:42.600 "write": true, 00:12:42.600 "unmap": true, 00:12:42.600 "flush": true, 00:12:42.600 "reset": true, 00:12:42.600 "nvme_admin": false, 00:12:42.600 "nvme_io": false, 00:12:42.600 "nvme_io_md": false, 00:12:42.600 "write_zeroes": true, 00:12:42.600 "zcopy": true, 00:12:42.600 "get_zone_info": false, 00:12:42.600 "zone_management": false, 00:12:42.600 "zone_append": false, 00:12:42.600 "compare": false, 00:12:42.600 "compare_and_write": false, 00:12:42.600 "abort": true, 00:12:42.600 "seek_hole": false, 00:12:42.600 "seek_data": false, 00:12:42.600 "copy": true, 00:12:42.600 "nvme_iov_md": false 00:12:42.600 }, 00:12:42.600 "memory_domains": [ 00:12:42.600 { 00:12:42.600 "dma_device_id": "system", 00:12:42.600 "dma_device_type": 1 00:12:42.600 }, 00:12:42.600 { 00:12:42.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.600 "dma_device_type": 2 00:12:42.600 } 00:12:42.600 ], 00:12:42.600 "driver_specific": {} 00:12:42.600 } 00:12:42.600 ] 00:12:42.600 15:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.600 15:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:42.600 15:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:42.600 15:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:42.600 15:39:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:42.600 15:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.600 15:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.600 15:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:42.600 15:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.600 15:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.600 15:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.600 15:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.600 15:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.600 15:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.600 15:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.600 15:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.600 15:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.600 15:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.600 "name": "Existed_Raid", 00:12:42.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.600 "strip_size_kb": 0, 00:12:42.600 "state": "configuring", 00:12:42.600 "raid_level": "raid1", 00:12:42.600 "superblock": false, 00:12:42.600 "num_base_bdevs": 3, 00:12:42.600 "num_base_bdevs_discovered": 2, 00:12:42.600 "num_base_bdevs_operational": 3, 00:12:42.600 "base_bdevs_list": [ 00:12:42.600 { 00:12:42.600 "name": "BaseBdev1", 00:12:42.600 "uuid": 
"4ff41c45-0b0a-4831-87da-f837b485feed", 00:12:42.600 "is_configured": true, 00:12:42.600 "data_offset": 0, 00:12:42.600 "data_size": 65536 00:12:42.600 }, 00:12:42.600 { 00:12:42.600 "name": null, 00:12:42.600 "uuid": "b1358547-9fe9-4331-b1b4-422ab99364a9", 00:12:42.600 "is_configured": false, 00:12:42.600 "data_offset": 0, 00:12:42.600 "data_size": 65536 00:12:42.600 }, 00:12:42.600 { 00:12:42.600 "name": "BaseBdev3", 00:12:42.600 "uuid": "6669aaa7-10d6-4d0b-a418-91b0712a9192", 00:12:42.600 "is_configured": true, 00:12:42.600 "data_offset": 0, 00:12:42.600 "data_size": 65536 00:12:42.600 } 00:12:42.600 ] 00:12:42.600 }' 00:12:42.600 15:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.600 15:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.165 15:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.165 15:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.165 15:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.165 15:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:43.165 15:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.165 15:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:43.165 15:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:43.165 15:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.165 15:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.165 [2024-12-06 15:39:26.215116] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:43.165 15:39:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.165 15:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:43.165 15:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:43.165 15:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.165 15:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.165 15:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.165 15:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:43.165 15:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.165 15:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.165 15:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.165 15:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.165 15:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.165 15:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.165 15:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.165 15:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.165 15:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.165 15:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.165 "name": "Existed_Raid", 00:12:43.165 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:43.165 "strip_size_kb": 0, 00:12:43.165 "state": "configuring", 00:12:43.165 "raid_level": "raid1", 00:12:43.165 "superblock": false, 00:12:43.165 "num_base_bdevs": 3, 00:12:43.165 "num_base_bdevs_discovered": 1, 00:12:43.165 "num_base_bdevs_operational": 3, 00:12:43.165 "base_bdevs_list": [ 00:12:43.165 { 00:12:43.165 "name": "BaseBdev1", 00:12:43.165 "uuid": "4ff41c45-0b0a-4831-87da-f837b485feed", 00:12:43.165 "is_configured": true, 00:12:43.165 "data_offset": 0, 00:12:43.165 "data_size": 65536 00:12:43.165 }, 00:12:43.165 { 00:12:43.165 "name": null, 00:12:43.165 "uuid": "b1358547-9fe9-4331-b1b4-422ab99364a9", 00:12:43.165 "is_configured": false, 00:12:43.165 "data_offset": 0, 00:12:43.165 "data_size": 65536 00:12:43.165 }, 00:12:43.165 { 00:12:43.165 "name": null, 00:12:43.165 "uuid": "6669aaa7-10d6-4d0b-a418-91b0712a9192", 00:12:43.165 "is_configured": false, 00:12:43.165 "data_offset": 0, 00:12:43.165 "data_size": 65536 00:12:43.165 } 00:12:43.165 ] 00:12:43.165 }' 00:12:43.165 15:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.165 15:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.423 15:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.423 15:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.423 15:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.423 15:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:43.423 15:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.423 15:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:43.423 15:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:43.423 15:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.423 15:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.423 [2024-12-06 15:39:26.690608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:43.423 15:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.423 15:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:43.423 15:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:43.423 15:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.423 15:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.423 15:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.423 15:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:43.423 15:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.423 15:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.423 15:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.423 15:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.423 15:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.423 15:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.423 15:39:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.423 15:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.681 15:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.681 15:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.681 "name": "Existed_Raid", 00:12:43.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.681 "strip_size_kb": 0, 00:12:43.681 "state": "configuring", 00:12:43.681 "raid_level": "raid1", 00:12:43.681 "superblock": false, 00:12:43.681 "num_base_bdevs": 3, 00:12:43.681 "num_base_bdevs_discovered": 2, 00:12:43.681 "num_base_bdevs_operational": 3, 00:12:43.681 "base_bdevs_list": [ 00:12:43.681 { 00:12:43.681 "name": "BaseBdev1", 00:12:43.681 "uuid": "4ff41c45-0b0a-4831-87da-f837b485feed", 00:12:43.681 "is_configured": true, 00:12:43.681 "data_offset": 0, 00:12:43.681 "data_size": 65536 00:12:43.681 }, 00:12:43.681 { 00:12:43.681 "name": null, 00:12:43.681 "uuid": "b1358547-9fe9-4331-b1b4-422ab99364a9", 00:12:43.681 "is_configured": false, 00:12:43.681 "data_offset": 0, 00:12:43.681 "data_size": 65536 00:12:43.681 }, 00:12:43.681 { 00:12:43.681 "name": "BaseBdev3", 00:12:43.681 "uuid": "6669aaa7-10d6-4d0b-a418-91b0712a9192", 00:12:43.681 "is_configured": true, 00:12:43.681 "data_offset": 0, 00:12:43.681 "data_size": 65536 00:12:43.681 } 00:12:43.681 ] 00:12:43.681 }' 00:12:43.681 15:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.681 15:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.939 15:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.939 15:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:43.939 15:39:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.939 15:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.939 15:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.939 15:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:43.939 15:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:43.940 15:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.940 15:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.940 [2024-12-06 15:39:27.129993] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:44.198 15:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.198 15:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:44.198 15:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.198 15:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:44.198 15:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:44.198 15:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:44.198 15:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:44.198 15:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.198 15:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.198 15:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.198 15:39:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.198 15:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.198 15:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.198 15:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.198 15:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.198 15:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.198 15:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.198 "name": "Existed_Raid", 00:12:44.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.198 "strip_size_kb": 0, 00:12:44.198 "state": "configuring", 00:12:44.198 "raid_level": "raid1", 00:12:44.198 "superblock": false, 00:12:44.198 "num_base_bdevs": 3, 00:12:44.198 "num_base_bdevs_discovered": 1, 00:12:44.198 "num_base_bdevs_operational": 3, 00:12:44.198 "base_bdevs_list": [ 00:12:44.198 { 00:12:44.198 "name": null, 00:12:44.198 "uuid": "4ff41c45-0b0a-4831-87da-f837b485feed", 00:12:44.198 "is_configured": false, 00:12:44.198 "data_offset": 0, 00:12:44.198 "data_size": 65536 00:12:44.198 }, 00:12:44.198 { 00:12:44.198 "name": null, 00:12:44.198 "uuid": "b1358547-9fe9-4331-b1b4-422ab99364a9", 00:12:44.198 "is_configured": false, 00:12:44.198 "data_offset": 0, 00:12:44.198 "data_size": 65536 00:12:44.198 }, 00:12:44.198 { 00:12:44.198 "name": "BaseBdev3", 00:12:44.198 "uuid": "6669aaa7-10d6-4d0b-a418-91b0712a9192", 00:12:44.198 "is_configured": true, 00:12:44.198 "data_offset": 0, 00:12:44.198 "data_size": 65536 00:12:44.198 } 00:12:44.198 ] 00:12:44.198 }' 00:12:44.198 15:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.198 15:39:27 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:12:44.456 15:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:44.456 15:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.456 15:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.456 15:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.456 15:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.456 15:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:44.456 15:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:44.456 15:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.456 15:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.456 [2024-12-06 15:39:27.687804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:44.456 15:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.456 15:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:44.456 15:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.456 15:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:44.456 15:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:44.456 15:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:44.456 15:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:12:44.456 15:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.456 15:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.456 15:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.456 15:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.456 15:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.456 15:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.456 15:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.456 15:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.456 15:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.456 15:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.456 "name": "Existed_Raid", 00:12:44.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.456 "strip_size_kb": 0, 00:12:44.456 "state": "configuring", 00:12:44.456 "raid_level": "raid1", 00:12:44.456 "superblock": false, 00:12:44.456 "num_base_bdevs": 3, 00:12:44.456 "num_base_bdevs_discovered": 2, 00:12:44.456 "num_base_bdevs_operational": 3, 00:12:44.456 "base_bdevs_list": [ 00:12:44.456 { 00:12:44.456 "name": null, 00:12:44.456 "uuid": "4ff41c45-0b0a-4831-87da-f837b485feed", 00:12:44.456 "is_configured": false, 00:12:44.456 "data_offset": 0, 00:12:44.456 "data_size": 65536 00:12:44.456 }, 00:12:44.456 { 00:12:44.456 "name": "BaseBdev2", 00:12:44.456 "uuid": "b1358547-9fe9-4331-b1b4-422ab99364a9", 00:12:44.456 "is_configured": true, 00:12:44.456 "data_offset": 0, 00:12:44.456 "data_size": 65536 00:12:44.456 }, 00:12:44.456 { 
00:12:44.456 "name": "BaseBdev3", 00:12:44.456 "uuid": "6669aaa7-10d6-4d0b-a418-91b0712a9192", 00:12:44.456 "is_configured": true, 00:12:44.456 "data_offset": 0, 00:12:44.456 "data_size": 65536 00:12:44.456 } 00:12:44.456 ] 00:12:44.456 }' 00:12:44.456 15:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.456 15:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4ff41c45-0b0a-4831-87da-f837b485feed 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.024 15:39:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.024 [2024-12-06 15:39:28.228044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:45.024 [2024-12-06 15:39:28.228111] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:45.024 [2024-12-06 15:39:28.228121] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:45.024 [2024-12-06 15:39:28.228417] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:45.024 [2024-12-06 15:39:28.228619] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:45.024 [2024-12-06 15:39:28.228642] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:45.024 [2024-12-06 15:39:28.228893] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:45.024 NewBaseBdev 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.024 [ 00:12:45.024 { 00:12:45.024 "name": "NewBaseBdev", 00:12:45.024 "aliases": [ 00:12:45.024 "4ff41c45-0b0a-4831-87da-f837b485feed" 00:12:45.024 ], 00:12:45.024 "product_name": "Malloc disk", 00:12:45.024 "block_size": 512, 00:12:45.024 "num_blocks": 65536, 00:12:45.024 "uuid": "4ff41c45-0b0a-4831-87da-f837b485feed", 00:12:45.024 "assigned_rate_limits": { 00:12:45.024 "rw_ios_per_sec": 0, 00:12:45.024 "rw_mbytes_per_sec": 0, 00:12:45.024 "r_mbytes_per_sec": 0, 00:12:45.024 "w_mbytes_per_sec": 0 00:12:45.024 }, 00:12:45.024 "claimed": true, 00:12:45.024 "claim_type": "exclusive_write", 00:12:45.024 "zoned": false, 00:12:45.024 "supported_io_types": { 00:12:45.024 "read": true, 00:12:45.024 "write": true, 00:12:45.024 "unmap": true, 00:12:45.024 "flush": true, 00:12:45.024 "reset": true, 00:12:45.024 "nvme_admin": false, 00:12:45.024 "nvme_io": false, 00:12:45.024 "nvme_io_md": false, 00:12:45.024 "write_zeroes": true, 00:12:45.024 "zcopy": true, 00:12:45.024 "get_zone_info": false, 00:12:45.024 "zone_management": false, 00:12:45.024 "zone_append": false, 00:12:45.024 "compare": false, 00:12:45.024 "compare_and_write": false, 00:12:45.024 "abort": true, 00:12:45.024 "seek_hole": false, 00:12:45.024 "seek_data": false, 00:12:45.024 "copy": true, 00:12:45.024 "nvme_iov_md": false 00:12:45.024 }, 00:12:45.024 "memory_domains": [ 00:12:45.024 { 00:12:45.024 
"dma_device_id": "system", 00:12:45.024 "dma_device_type": 1 00:12:45.024 }, 00:12:45.024 { 00:12:45.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.024 "dma_device_type": 2 00:12:45.024 } 00:12:45.024 ], 00:12:45.024 "driver_specific": {} 00:12:45.024 } 00:12:45.024 ] 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.024 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.283 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.283 "name": "Existed_Raid", 00:12:45.283 "uuid": "e88c8faf-498d-4472-a3ab-79b61d08635b", 00:12:45.283 "strip_size_kb": 0, 00:12:45.283 "state": "online", 00:12:45.283 "raid_level": "raid1", 00:12:45.283 "superblock": false, 00:12:45.283 "num_base_bdevs": 3, 00:12:45.283 "num_base_bdevs_discovered": 3, 00:12:45.283 "num_base_bdevs_operational": 3, 00:12:45.283 "base_bdevs_list": [ 00:12:45.283 { 00:12:45.283 "name": "NewBaseBdev", 00:12:45.283 "uuid": "4ff41c45-0b0a-4831-87da-f837b485feed", 00:12:45.283 "is_configured": true, 00:12:45.283 "data_offset": 0, 00:12:45.283 "data_size": 65536 00:12:45.283 }, 00:12:45.283 { 00:12:45.283 "name": "BaseBdev2", 00:12:45.283 "uuid": "b1358547-9fe9-4331-b1b4-422ab99364a9", 00:12:45.283 "is_configured": true, 00:12:45.283 "data_offset": 0, 00:12:45.283 "data_size": 65536 00:12:45.283 }, 00:12:45.283 { 00:12:45.283 "name": "BaseBdev3", 00:12:45.283 "uuid": "6669aaa7-10d6-4d0b-a418-91b0712a9192", 00:12:45.283 "is_configured": true, 00:12:45.283 "data_offset": 0, 00:12:45.283 "data_size": 65536 00:12:45.283 } 00:12:45.283 ] 00:12:45.283 }' 00:12:45.283 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.283 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.541 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:45.541 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:45.541 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:45.541 15:39:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:45.541 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:45.541 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:45.541 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:45.541 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:45.541 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.541 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.541 [2024-12-06 15:39:28.699799] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:45.541 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.541 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:45.541 "name": "Existed_Raid", 00:12:45.541 "aliases": [ 00:12:45.541 "e88c8faf-498d-4472-a3ab-79b61d08635b" 00:12:45.541 ], 00:12:45.541 "product_name": "Raid Volume", 00:12:45.541 "block_size": 512, 00:12:45.541 "num_blocks": 65536, 00:12:45.541 "uuid": "e88c8faf-498d-4472-a3ab-79b61d08635b", 00:12:45.541 "assigned_rate_limits": { 00:12:45.541 "rw_ios_per_sec": 0, 00:12:45.541 "rw_mbytes_per_sec": 0, 00:12:45.541 "r_mbytes_per_sec": 0, 00:12:45.541 "w_mbytes_per_sec": 0 00:12:45.541 }, 00:12:45.541 "claimed": false, 00:12:45.541 "zoned": false, 00:12:45.541 "supported_io_types": { 00:12:45.541 "read": true, 00:12:45.541 "write": true, 00:12:45.541 "unmap": false, 00:12:45.541 "flush": false, 00:12:45.541 "reset": true, 00:12:45.541 "nvme_admin": false, 00:12:45.541 "nvme_io": false, 00:12:45.541 "nvme_io_md": false, 00:12:45.541 "write_zeroes": true, 00:12:45.541 "zcopy": false, 00:12:45.541 
"get_zone_info": false, 00:12:45.541 "zone_management": false, 00:12:45.541 "zone_append": false, 00:12:45.541 "compare": false, 00:12:45.541 "compare_and_write": false, 00:12:45.541 "abort": false, 00:12:45.541 "seek_hole": false, 00:12:45.541 "seek_data": false, 00:12:45.541 "copy": false, 00:12:45.541 "nvme_iov_md": false 00:12:45.541 }, 00:12:45.541 "memory_domains": [ 00:12:45.541 { 00:12:45.541 "dma_device_id": "system", 00:12:45.541 "dma_device_type": 1 00:12:45.541 }, 00:12:45.541 { 00:12:45.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.541 "dma_device_type": 2 00:12:45.541 }, 00:12:45.541 { 00:12:45.541 "dma_device_id": "system", 00:12:45.541 "dma_device_type": 1 00:12:45.541 }, 00:12:45.541 { 00:12:45.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.541 "dma_device_type": 2 00:12:45.541 }, 00:12:45.541 { 00:12:45.541 "dma_device_id": "system", 00:12:45.541 "dma_device_type": 1 00:12:45.541 }, 00:12:45.541 { 00:12:45.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.541 "dma_device_type": 2 00:12:45.541 } 00:12:45.541 ], 00:12:45.541 "driver_specific": { 00:12:45.541 "raid": { 00:12:45.541 "uuid": "e88c8faf-498d-4472-a3ab-79b61d08635b", 00:12:45.541 "strip_size_kb": 0, 00:12:45.541 "state": "online", 00:12:45.541 "raid_level": "raid1", 00:12:45.541 "superblock": false, 00:12:45.541 "num_base_bdevs": 3, 00:12:45.541 "num_base_bdevs_discovered": 3, 00:12:45.541 "num_base_bdevs_operational": 3, 00:12:45.541 "base_bdevs_list": [ 00:12:45.541 { 00:12:45.541 "name": "NewBaseBdev", 00:12:45.541 "uuid": "4ff41c45-0b0a-4831-87da-f837b485feed", 00:12:45.541 "is_configured": true, 00:12:45.541 "data_offset": 0, 00:12:45.542 "data_size": 65536 00:12:45.542 }, 00:12:45.542 { 00:12:45.542 "name": "BaseBdev2", 00:12:45.542 "uuid": "b1358547-9fe9-4331-b1b4-422ab99364a9", 00:12:45.542 "is_configured": true, 00:12:45.542 "data_offset": 0, 00:12:45.542 "data_size": 65536 00:12:45.542 }, 00:12:45.542 { 00:12:45.542 "name": "BaseBdev3", 00:12:45.542 "uuid": 
"6669aaa7-10d6-4d0b-a418-91b0712a9192", 00:12:45.542 "is_configured": true, 00:12:45.542 "data_offset": 0, 00:12:45.542 "data_size": 65536 00:12:45.542 } 00:12:45.542 ] 00:12:45.542 } 00:12:45.542 } 00:12:45.542 }' 00:12:45.542 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:45.542 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:45.542 BaseBdev2 00:12:45.542 BaseBdev3' 00:12:45.542 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.542 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:45.542 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:45.542 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:45.542 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.542 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.542 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.800 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.800 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:45.800 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:45.800 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:45.800 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.800 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:45.800 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.800 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.800 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.800 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:45.800 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:45.800 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:45.800 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:45.800 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.800 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.800 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.800 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.800 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:45.800 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:45.800 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:45.800 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.800 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:45.800 [2024-12-06 15:39:28.967107] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:45.801 [2024-12-06 15:39:28.967158] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:45.801 [2024-12-06 15:39:28.967274] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:45.801 [2024-12-06 15:39:28.967642] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:45.801 [2024-12-06 15:39:28.967666] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:45.801 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.801 15:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67420 00:12:45.801 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67420 ']' 00:12:45.801 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67420 00:12:45.801 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:45.801 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:45.801 15:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67420 00:12:45.801 killing process with pid 67420 00:12:45.801 15:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:45.801 15:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:45.801 15:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67420' 00:12:45.801 15:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67420 00:12:45.801 
[2024-12-06 15:39:29.012471] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:45.801 15:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67420 00:12:46.405 [2024-12-06 15:39:29.355143] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:47.340 15:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:47.340 00:12:47.340 real 0m10.637s 00:12:47.340 user 0m16.460s 00:12:47.340 sys 0m2.343s 00:12:47.340 15:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:47.340 15:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.340 ************************************ 00:12:47.340 END TEST raid_state_function_test 00:12:47.340 ************************************ 00:12:47.600 15:39:30 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:12:47.600 15:39:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:47.600 15:39:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:47.600 15:39:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:47.600 ************************************ 00:12:47.600 START TEST raid_state_function_test_sb 00:12:47.600 ************************************ 00:12:47.600 15:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:12:47.600 15:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:47.600 15:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:47.600 15:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:47.600 15:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:47.600 15:39:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:47.600 15:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:47.600 15:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:47.600 15:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:47.600 15:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:47.600 15:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:47.600 15:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:47.600 15:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:47.600 15:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:47.600 15:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:47.600 15:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:47.600 15:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:47.600 15:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:47.600 15:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:47.600 15:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:47.600 15:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:47.600 15:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:47.600 15:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:47.600 
15:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:47.600 15:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:47.600 15:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:47.600 15:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68047 00:12:47.600 Process raid pid: 68047 00:12:47.600 15:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:47.600 15:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68047' 00:12:47.600 15:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68047 00:12:47.600 15:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68047 ']' 00:12:47.600 15:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.600 15:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:47.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.600 15:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.600 15:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:47.600 15:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.600 [2024-12-06 15:39:30.812331] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:12:47.600 [2024-12-06 15:39:30.812484] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:47.859 [2024-12-06 15:39:30.997566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.859 [2024-12-06 15:39:31.141849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.118 [2024-12-06 15:39:31.394838] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:48.118 [2024-12-06 15:39:31.394883] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:48.376 15:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:48.376 15:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:48.377 15:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:48.377 15:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.377 15:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.377 [2024-12-06 15:39:31.646959] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:48.377 [2024-12-06 15:39:31.647034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:48.377 [2024-12-06 15:39:31.647056] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:48.377 [2024-12-06 15:39:31.647070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:48.377 [2024-12-06 15:39:31.647078] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:48.377 [2024-12-06 15:39:31.647093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:48.377 15:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.377 15:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:48.377 15:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.377 15:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:48.377 15:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.377 15:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.377 15:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:48.377 15:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.377 15:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.377 15:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.377 15:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.377 15:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.377 15:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.377 15:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.377 15:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.635 15:39:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.635 15:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.635 "name": "Existed_Raid", 00:12:48.635 "uuid": "c65572b3-41cc-4460-8037-01b1132ec8db", 00:12:48.635 "strip_size_kb": 0, 00:12:48.635 "state": "configuring", 00:12:48.635 "raid_level": "raid1", 00:12:48.635 "superblock": true, 00:12:48.635 "num_base_bdevs": 3, 00:12:48.635 "num_base_bdevs_discovered": 0, 00:12:48.635 "num_base_bdevs_operational": 3, 00:12:48.635 "base_bdevs_list": [ 00:12:48.635 { 00:12:48.635 "name": "BaseBdev1", 00:12:48.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.635 "is_configured": false, 00:12:48.635 "data_offset": 0, 00:12:48.635 "data_size": 0 00:12:48.635 }, 00:12:48.635 { 00:12:48.635 "name": "BaseBdev2", 00:12:48.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.635 "is_configured": false, 00:12:48.635 "data_offset": 0, 00:12:48.635 "data_size": 0 00:12:48.635 }, 00:12:48.635 { 00:12:48.635 "name": "BaseBdev3", 00:12:48.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.635 "is_configured": false, 00:12:48.635 "data_offset": 0, 00:12:48.635 "data_size": 0 00:12:48.635 } 00:12:48.635 ] 00:12:48.635 }' 00:12:48.635 15:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.635 15:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.894 15:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:48.894 15:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.894 15:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.894 [2024-12-06 15:39:32.110382] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:48.894 [2024-12-06 15:39:32.110436] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:48.894 15:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.894 15:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:48.894 15:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.894 15:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.894 [2024-12-06 15:39:32.122376] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:48.894 [2024-12-06 15:39:32.122439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:48.894 [2024-12-06 15:39:32.122452] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:48.894 [2024-12-06 15:39:32.122467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:48.894 [2024-12-06 15:39:32.122475] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:48.894 [2024-12-06 15:39:32.122489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:48.894 15:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.894 15:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:48.894 15:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.894 15:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.894 [2024-12-06 15:39:32.179334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:48.894 BaseBdev1 
00:12:48.894 15:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.894 15:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:48.894 15:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:48.894 15:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:48.894 15:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:48.894 15:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:48.894 15:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:48.894 15:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:48.894 15:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.894 15:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.154 15:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.154 15:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:49.154 15:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.154 15:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.154 [ 00:12:49.154 { 00:12:49.154 "name": "BaseBdev1", 00:12:49.154 "aliases": [ 00:12:49.154 "6f5302dd-d9eb-49d8-899e-6fcc9da5729a" 00:12:49.154 ], 00:12:49.154 "product_name": "Malloc disk", 00:12:49.154 "block_size": 512, 00:12:49.154 "num_blocks": 65536, 00:12:49.154 "uuid": "6f5302dd-d9eb-49d8-899e-6fcc9da5729a", 00:12:49.154 "assigned_rate_limits": { 00:12:49.154 
"rw_ios_per_sec": 0, 00:12:49.154 "rw_mbytes_per_sec": 0, 00:12:49.154 "r_mbytes_per_sec": 0, 00:12:49.154 "w_mbytes_per_sec": 0 00:12:49.154 }, 00:12:49.154 "claimed": true, 00:12:49.154 "claim_type": "exclusive_write", 00:12:49.154 "zoned": false, 00:12:49.154 "supported_io_types": { 00:12:49.154 "read": true, 00:12:49.154 "write": true, 00:12:49.154 "unmap": true, 00:12:49.154 "flush": true, 00:12:49.154 "reset": true, 00:12:49.154 "nvme_admin": false, 00:12:49.154 "nvme_io": false, 00:12:49.154 "nvme_io_md": false, 00:12:49.154 "write_zeroes": true, 00:12:49.154 "zcopy": true, 00:12:49.154 "get_zone_info": false, 00:12:49.154 "zone_management": false, 00:12:49.154 "zone_append": false, 00:12:49.154 "compare": false, 00:12:49.154 "compare_and_write": false, 00:12:49.154 "abort": true, 00:12:49.154 "seek_hole": false, 00:12:49.154 "seek_data": false, 00:12:49.154 "copy": true, 00:12:49.154 "nvme_iov_md": false 00:12:49.154 }, 00:12:49.154 "memory_domains": [ 00:12:49.154 { 00:12:49.154 "dma_device_id": "system", 00:12:49.154 "dma_device_type": 1 00:12:49.154 }, 00:12:49.154 { 00:12:49.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.154 "dma_device_type": 2 00:12:49.154 } 00:12:49.154 ], 00:12:49.154 "driver_specific": {} 00:12:49.154 } 00:12:49.154 ] 00:12:49.154 15:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.154 15:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:49.154 15:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:49.154 15:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.154 15:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.154 15:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:49.154 15:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.154 15:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:49.154 15:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.154 15:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.154 15:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.154 15:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.154 15:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.154 15:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.154 15:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.154 15:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.154 15:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.154 15:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.154 "name": "Existed_Raid", 00:12:49.154 "uuid": "266169cd-888c-4057-ace4-15e569147dd1", 00:12:49.154 "strip_size_kb": 0, 00:12:49.154 "state": "configuring", 00:12:49.154 "raid_level": "raid1", 00:12:49.154 "superblock": true, 00:12:49.154 "num_base_bdevs": 3, 00:12:49.154 "num_base_bdevs_discovered": 1, 00:12:49.154 "num_base_bdevs_operational": 3, 00:12:49.154 "base_bdevs_list": [ 00:12:49.154 { 00:12:49.154 "name": "BaseBdev1", 00:12:49.154 "uuid": "6f5302dd-d9eb-49d8-899e-6fcc9da5729a", 00:12:49.154 "is_configured": true, 00:12:49.154 "data_offset": 2048, 00:12:49.154 "data_size": 63488 
00:12:49.154 }, 00:12:49.154 { 00:12:49.154 "name": "BaseBdev2", 00:12:49.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.154 "is_configured": false, 00:12:49.154 "data_offset": 0, 00:12:49.154 "data_size": 0 00:12:49.154 }, 00:12:49.154 { 00:12:49.154 "name": "BaseBdev3", 00:12:49.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.154 "is_configured": false, 00:12:49.154 "data_offset": 0, 00:12:49.154 "data_size": 0 00:12:49.154 } 00:12:49.154 ] 00:12:49.154 }' 00:12:49.154 15:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.154 15:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.413 15:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:49.414 15:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.414 15:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.414 [2024-12-06 15:39:32.630802] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:49.414 [2024-12-06 15:39:32.630877] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:49.414 15:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.414 15:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:49.414 15:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.414 15:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.414 [2024-12-06 15:39:32.642833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:49.414 [2024-12-06 15:39:32.645368] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:49.414 [2024-12-06 15:39:32.645419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:49.414 [2024-12-06 15:39:32.645432] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:49.414 [2024-12-06 15:39:32.645445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:49.414 15:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.414 15:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:49.414 15:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:49.414 15:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:49.414 15:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.414 15:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.414 15:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.414 15:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.414 15:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:49.414 15:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.414 15:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.414 15:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.414 15:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:12:49.414 15:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.414 15:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.414 15:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.414 15:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.414 15:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.414 15:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.414 "name": "Existed_Raid", 00:12:49.414 "uuid": "352bbb57-9b3e-4be0-b9df-0266fe9ac941", 00:12:49.414 "strip_size_kb": 0, 00:12:49.414 "state": "configuring", 00:12:49.414 "raid_level": "raid1", 00:12:49.414 "superblock": true, 00:12:49.414 "num_base_bdevs": 3, 00:12:49.414 "num_base_bdevs_discovered": 1, 00:12:49.414 "num_base_bdevs_operational": 3, 00:12:49.414 "base_bdevs_list": [ 00:12:49.414 { 00:12:49.414 "name": "BaseBdev1", 00:12:49.414 "uuid": "6f5302dd-d9eb-49d8-899e-6fcc9da5729a", 00:12:49.414 "is_configured": true, 00:12:49.414 "data_offset": 2048, 00:12:49.414 "data_size": 63488 00:12:49.414 }, 00:12:49.414 { 00:12:49.414 "name": "BaseBdev2", 00:12:49.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.414 "is_configured": false, 00:12:49.414 "data_offset": 0, 00:12:49.414 "data_size": 0 00:12:49.414 }, 00:12:49.414 { 00:12:49.414 "name": "BaseBdev3", 00:12:49.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.414 "is_configured": false, 00:12:49.414 "data_offset": 0, 00:12:49.414 "data_size": 0 00:12:49.414 } 00:12:49.414 ] 00:12:49.414 }' 00:12:49.414 15:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.414 15:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:12:49.983 15:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:49.983 15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.983 15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.983 [2024-12-06 15:39:33.094208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:49.983 BaseBdev2 00:12:49.983 15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.983 15:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:49.983 15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:49.983 15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:49.983 15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:49.983 15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:49.983 15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:49.983 15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:49.983 15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.983 15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.983 15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.983 15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:49.983 15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:49.983 15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.983 [ 00:12:49.983 { 00:12:49.983 "name": "BaseBdev2", 00:12:49.983 "aliases": [ 00:12:49.983 "a7d6f260-f4ab-44f4-a819-89bef7339e94" 00:12:49.983 ], 00:12:49.983 "product_name": "Malloc disk", 00:12:49.983 "block_size": 512, 00:12:49.983 "num_blocks": 65536, 00:12:49.983 "uuid": "a7d6f260-f4ab-44f4-a819-89bef7339e94", 00:12:49.983 "assigned_rate_limits": { 00:12:49.983 "rw_ios_per_sec": 0, 00:12:49.983 "rw_mbytes_per_sec": 0, 00:12:49.983 "r_mbytes_per_sec": 0, 00:12:49.983 "w_mbytes_per_sec": 0 00:12:49.983 }, 00:12:49.983 "claimed": true, 00:12:49.983 "claim_type": "exclusive_write", 00:12:49.984 "zoned": false, 00:12:49.984 "supported_io_types": { 00:12:49.984 "read": true, 00:12:49.984 "write": true, 00:12:49.984 "unmap": true, 00:12:49.984 "flush": true, 00:12:49.984 "reset": true, 00:12:49.984 "nvme_admin": false, 00:12:49.984 "nvme_io": false, 00:12:49.984 "nvme_io_md": false, 00:12:49.984 "write_zeroes": true, 00:12:49.984 "zcopy": true, 00:12:49.984 "get_zone_info": false, 00:12:49.984 "zone_management": false, 00:12:49.984 "zone_append": false, 00:12:49.984 "compare": false, 00:12:49.984 "compare_and_write": false, 00:12:49.984 "abort": true, 00:12:49.984 "seek_hole": false, 00:12:49.984 "seek_data": false, 00:12:49.984 "copy": true, 00:12:49.984 "nvme_iov_md": false 00:12:49.984 }, 00:12:49.984 "memory_domains": [ 00:12:49.984 { 00:12:49.984 "dma_device_id": "system", 00:12:49.984 "dma_device_type": 1 00:12:49.984 }, 00:12:49.984 { 00:12:49.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.984 "dma_device_type": 2 00:12:49.984 } 00:12:49.984 ], 00:12:49.984 "driver_specific": {} 00:12:49.984 } 00:12:49.984 ] 00:12:49.984 15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.984 15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:12:49.984 15:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:49.984 15:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:49.984 15:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:49.984 15:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.984 15:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.984 15:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.984 15:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.984 15:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:49.984 15:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.984 15:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.984 15:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.984 15:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.984 15:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.984 15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.984 15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.984 15:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.984 15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.984 
15:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.984 "name": "Existed_Raid", 00:12:49.984 "uuid": "352bbb57-9b3e-4be0-b9df-0266fe9ac941", 00:12:49.984 "strip_size_kb": 0, 00:12:49.984 "state": "configuring", 00:12:49.984 "raid_level": "raid1", 00:12:49.984 "superblock": true, 00:12:49.984 "num_base_bdevs": 3, 00:12:49.984 "num_base_bdevs_discovered": 2, 00:12:49.984 "num_base_bdevs_operational": 3, 00:12:49.984 "base_bdevs_list": [ 00:12:49.984 { 00:12:49.984 "name": "BaseBdev1", 00:12:49.984 "uuid": "6f5302dd-d9eb-49d8-899e-6fcc9da5729a", 00:12:49.984 "is_configured": true, 00:12:49.984 "data_offset": 2048, 00:12:49.984 "data_size": 63488 00:12:49.984 }, 00:12:49.984 { 00:12:49.984 "name": "BaseBdev2", 00:12:49.984 "uuid": "a7d6f260-f4ab-44f4-a819-89bef7339e94", 00:12:49.984 "is_configured": true, 00:12:49.984 "data_offset": 2048, 00:12:49.984 "data_size": 63488 00:12:49.984 }, 00:12:49.984 { 00:12:49.984 "name": "BaseBdev3", 00:12:49.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.984 "is_configured": false, 00:12:49.984 "data_offset": 0, 00:12:49.984 "data_size": 0 00:12:49.984 } 00:12:49.984 ] 00:12:49.984 }' 00:12:49.984 15:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.984 15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.553 15:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:50.553 15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.553 15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.553 [2024-12-06 15:39:33.627615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:50.553 [2024-12-06 15:39:33.627924] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:12:50.553 [2024-12-06 15:39:33.627953] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:50.553 [2024-12-06 15:39:33.628286] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:50.553 BaseBdev3 00:12:50.553 [2024-12-06 15:39:33.628466] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:50.553 [2024-12-06 15:39:33.628477] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:50.553 [2024-12-06 15:39:33.628666] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.553 15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.553 15:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:50.553 15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:50.553 15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:50.553 15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:50.553 15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:50.553 15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:50.553 15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:50.553 15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.553 15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.553 15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.553 15:39:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:50.553 15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.553 15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.553 [ 00:12:50.553 { 00:12:50.553 "name": "BaseBdev3", 00:12:50.553 "aliases": [ 00:12:50.553 "ccee74f3-49f9-4863-a2a4-bdeef342b1d8" 00:12:50.553 ], 00:12:50.553 "product_name": "Malloc disk", 00:12:50.553 "block_size": 512, 00:12:50.553 "num_blocks": 65536, 00:12:50.553 "uuid": "ccee74f3-49f9-4863-a2a4-bdeef342b1d8", 00:12:50.553 "assigned_rate_limits": { 00:12:50.553 "rw_ios_per_sec": 0, 00:12:50.553 "rw_mbytes_per_sec": 0, 00:12:50.553 "r_mbytes_per_sec": 0, 00:12:50.553 "w_mbytes_per_sec": 0 00:12:50.553 }, 00:12:50.553 "claimed": true, 00:12:50.553 "claim_type": "exclusive_write", 00:12:50.553 "zoned": false, 00:12:50.553 "supported_io_types": { 00:12:50.553 "read": true, 00:12:50.553 "write": true, 00:12:50.553 "unmap": true, 00:12:50.553 "flush": true, 00:12:50.553 "reset": true, 00:12:50.554 "nvme_admin": false, 00:12:50.554 "nvme_io": false, 00:12:50.554 "nvme_io_md": false, 00:12:50.554 "write_zeroes": true, 00:12:50.554 "zcopy": true, 00:12:50.554 "get_zone_info": false, 00:12:50.554 "zone_management": false, 00:12:50.554 "zone_append": false, 00:12:50.554 "compare": false, 00:12:50.554 "compare_and_write": false, 00:12:50.554 "abort": true, 00:12:50.554 "seek_hole": false, 00:12:50.554 "seek_data": false, 00:12:50.554 "copy": true, 00:12:50.554 "nvme_iov_md": false 00:12:50.554 }, 00:12:50.554 "memory_domains": [ 00:12:50.554 { 00:12:50.554 "dma_device_id": "system", 00:12:50.554 "dma_device_type": 1 00:12:50.554 }, 00:12:50.554 { 00:12:50.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.554 "dma_device_type": 2 00:12:50.554 } 00:12:50.554 ], 00:12:50.554 "driver_specific": {} 00:12:50.554 } 00:12:50.554 ] 
00:12:50.554 15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.554 15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:50.554 15:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:50.554 15:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:50.554 15:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:50.554 15:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.554 15:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.554 15:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.554 15:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.554 15:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:50.554 15:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.554 15:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.554 15:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.554 15:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.554 15:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.554 15:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.554 15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.554 
15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.554 15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.554 15:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.554 "name": "Existed_Raid", 00:12:50.554 "uuid": "352bbb57-9b3e-4be0-b9df-0266fe9ac941", 00:12:50.554 "strip_size_kb": 0, 00:12:50.554 "state": "online", 00:12:50.554 "raid_level": "raid1", 00:12:50.554 "superblock": true, 00:12:50.554 "num_base_bdevs": 3, 00:12:50.554 "num_base_bdevs_discovered": 3, 00:12:50.554 "num_base_bdevs_operational": 3, 00:12:50.554 "base_bdevs_list": [ 00:12:50.554 { 00:12:50.554 "name": "BaseBdev1", 00:12:50.554 "uuid": "6f5302dd-d9eb-49d8-899e-6fcc9da5729a", 00:12:50.554 "is_configured": true, 00:12:50.554 "data_offset": 2048, 00:12:50.554 "data_size": 63488 00:12:50.554 }, 00:12:50.554 { 00:12:50.554 "name": "BaseBdev2", 00:12:50.554 "uuid": "a7d6f260-f4ab-44f4-a819-89bef7339e94", 00:12:50.554 "is_configured": true, 00:12:50.554 "data_offset": 2048, 00:12:50.554 "data_size": 63488 00:12:50.554 }, 00:12:50.554 { 00:12:50.554 "name": "BaseBdev3", 00:12:50.554 "uuid": "ccee74f3-49f9-4863-a2a4-bdeef342b1d8", 00:12:50.554 "is_configured": true, 00:12:50.554 "data_offset": 2048, 00:12:50.554 "data_size": 63488 00:12:50.554 } 00:12:50.554 ] 00:12:50.554 }' 00:12:50.554 15:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.554 15:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.813 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:50.813 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:50.813 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:12:50.813 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:50.813 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:50.813 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:51.071 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:51.071 15:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.071 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:51.071 15:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.071 [2024-12-06 15:39:34.115406] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:51.071 15:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.071 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:51.071 "name": "Existed_Raid", 00:12:51.071 "aliases": [ 00:12:51.071 "352bbb57-9b3e-4be0-b9df-0266fe9ac941" 00:12:51.071 ], 00:12:51.071 "product_name": "Raid Volume", 00:12:51.071 "block_size": 512, 00:12:51.071 "num_blocks": 63488, 00:12:51.071 "uuid": "352bbb57-9b3e-4be0-b9df-0266fe9ac941", 00:12:51.071 "assigned_rate_limits": { 00:12:51.071 "rw_ios_per_sec": 0, 00:12:51.071 "rw_mbytes_per_sec": 0, 00:12:51.071 "r_mbytes_per_sec": 0, 00:12:51.071 "w_mbytes_per_sec": 0 00:12:51.071 }, 00:12:51.071 "claimed": false, 00:12:51.071 "zoned": false, 00:12:51.071 "supported_io_types": { 00:12:51.071 "read": true, 00:12:51.071 "write": true, 00:12:51.071 "unmap": false, 00:12:51.071 "flush": false, 00:12:51.071 "reset": true, 00:12:51.071 "nvme_admin": false, 00:12:51.071 "nvme_io": false, 00:12:51.071 "nvme_io_md": false, 00:12:51.071 "write_zeroes": true, 
00:12:51.071 "zcopy": false, 00:12:51.071 "get_zone_info": false, 00:12:51.071 "zone_management": false, 00:12:51.071 "zone_append": false, 00:12:51.071 "compare": false, 00:12:51.071 "compare_and_write": false, 00:12:51.071 "abort": false, 00:12:51.071 "seek_hole": false, 00:12:51.071 "seek_data": false, 00:12:51.071 "copy": false, 00:12:51.071 "nvme_iov_md": false 00:12:51.071 }, 00:12:51.071 "memory_domains": [ 00:12:51.071 { 00:12:51.071 "dma_device_id": "system", 00:12:51.071 "dma_device_type": 1 00:12:51.071 }, 00:12:51.071 { 00:12:51.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.071 "dma_device_type": 2 00:12:51.071 }, 00:12:51.071 { 00:12:51.071 "dma_device_id": "system", 00:12:51.071 "dma_device_type": 1 00:12:51.071 }, 00:12:51.071 { 00:12:51.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.071 "dma_device_type": 2 00:12:51.071 }, 00:12:51.071 { 00:12:51.071 "dma_device_id": "system", 00:12:51.071 "dma_device_type": 1 00:12:51.071 }, 00:12:51.071 { 00:12:51.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.071 "dma_device_type": 2 00:12:51.071 } 00:12:51.071 ], 00:12:51.071 "driver_specific": { 00:12:51.071 "raid": { 00:12:51.071 "uuid": "352bbb57-9b3e-4be0-b9df-0266fe9ac941", 00:12:51.072 "strip_size_kb": 0, 00:12:51.072 "state": "online", 00:12:51.072 "raid_level": "raid1", 00:12:51.072 "superblock": true, 00:12:51.072 "num_base_bdevs": 3, 00:12:51.072 "num_base_bdevs_discovered": 3, 00:12:51.072 "num_base_bdevs_operational": 3, 00:12:51.072 "base_bdevs_list": [ 00:12:51.072 { 00:12:51.072 "name": "BaseBdev1", 00:12:51.072 "uuid": "6f5302dd-d9eb-49d8-899e-6fcc9da5729a", 00:12:51.072 "is_configured": true, 00:12:51.072 "data_offset": 2048, 00:12:51.072 "data_size": 63488 00:12:51.072 }, 00:12:51.072 { 00:12:51.072 "name": "BaseBdev2", 00:12:51.072 "uuid": "a7d6f260-f4ab-44f4-a819-89bef7339e94", 00:12:51.072 "is_configured": true, 00:12:51.072 "data_offset": 2048, 00:12:51.072 "data_size": 63488 00:12:51.072 }, 00:12:51.072 { 
00:12:51.072 "name": "BaseBdev3", 00:12:51.072 "uuid": "ccee74f3-49f9-4863-a2a4-bdeef342b1d8", 00:12:51.072 "is_configured": true, 00:12:51.072 "data_offset": 2048, 00:12:51.072 "data_size": 63488 00:12:51.072 } 00:12:51.072 ] 00:12:51.072 } 00:12:51.072 } 00:12:51.072 }' 00:12:51.072 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:51.072 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:51.072 BaseBdev2 00:12:51.072 BaseBdev3' 00:12:51.072 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.072 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:51.072 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:51.072 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:51.072 15:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.072 15:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.072 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.072 15:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.072 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:51.072 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:51.072 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:51.072 15:39:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:51.072 15:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.072 15:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.072 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.072 15:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.072 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:51.072 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:51.072 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:51.072 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:51.072 15:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.072 15:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.072 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.072 15:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.330 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:51.330 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:51.330 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:51.330 15:39:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.330 15:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.330 [2024-12-06 15:39:34.386773] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:51.330 15:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.330 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:51.330 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:51.330 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:51.330 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:51.330 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:51.330 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:51.330 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.330 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:51.330 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:51.330 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:51.330 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:51.330 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.330 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.331 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.331 
15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.331 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.331 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.331 15:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.331 15:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.331 15:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.331 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.331 "name": "Existed_Raid", 00:12:51.331 "uuid": "352bbb57-9b3e-4be0-b9df-0266fe9ac941", 00:12:51.331 "strip_size_kb": 0, 00:12:51.331 "state": "online", 00:12:51.331 "raid_level": "raid1", 00:12:51.331 "superblock": true, 00:12:51.331 "num_base_bdevs": 3, 00:12:51.331 "num_base_bdevs_discovered": 2, 00:12:51.331 "num_base_bdevs_operational": 2, 00:12:51.331 "base_bdevs_list": [ 00:12:51.331 { 00:12:51.331 "name": null, 00:12:51.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.331 "is_configured": false, 00:12:51.331 "data_offset": 0, 00:12:51.331 "data_size": 63488 00:12:51.331 }, 00:12:51.331 { 00:12:51.331 "name": "BaseBdev2", 00:12:51.331 "uuid": "a7d6f260-f4ab-44f4-a819-89bef7339e94", 00:12:51.331 "is_configured": true, 00:12:51.331 "data_offset": 2048, 00:12:51.331 "data_size": 63488 00:12:51.331 }, 00:12:51.331 { 00:12:51.331 "name": "BaseBdev3", 00:12:51.331 "uuid": "ccee74f3-49f9-4863-a2a4-bdeef342b1d8", 00:12:51.331 "is_configured": true, 00:12:51.331 "data_offset": 2048, 00:12:51.331 "data_size": 63488 00:12:51.331 } 00:12:51.331 ] 00:12:51.331 }' 00:12:51.331 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.331 
15:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.897 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:51.897 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:51.897 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.897 15:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.897 15:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.897 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:51.897 15:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.897 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:51.897 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:51.897 15:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:51.897 15:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.897 15:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.897 [2024-12-06 15:39:34.982827] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:51.897 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.897 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:51.897 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:51.897 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:51.897 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:51.897 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.897 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.897 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.897 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:51.897 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:51.897 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:51.897 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.897 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.897 [2024-12-06 15:39:35.139929] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:51.897 [2024-12-06 15:39:35.140060] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:52.153 [2024-12-06 15:39:35.246520] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:52.153 [2024-12-06 15:39:35.246589] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:52.153 [2024-12-06 15:39:35.246605] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:52.153 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.153 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:52.153 15:39:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:52.153 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.153 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.153 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.154 BaseBdev2 00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.154 [ 00:12:52.154 { 00:12:52.154 "name": "BaseBdev2", 00:12:52.154 "aliases": [ 00:12:52.154 "1b4fc9c5-683e-43d2-9b57-bf17a4aef799" 00:12:52.154 ], 00:12:52.154 "product_name": "Malloc disk", 00:12:52.154 "block_size": 512, 00:12:52.154 "num_blocks": 65536, 00:12:52.154 "uuid": "1b4fc9c5-683e-43d2-9b57-bf17a4aef799", 00:12:52.154 "assigned_rate_limits": { 00:12:52.154 "rw_ios_per_sec": 0, 00:12:52.154 "rw_mbytes_per_sec": 0, 00:12:52.154 "r_mbytes_per_sec": 0, 00:12:52.154 "w_mbytes_per_sec": 0 00:12:52.154 }, 00:12:52.154 "claimed": false, 00:12:52.154 "zoned": false, 00:12:52.154 "supported_io_types": { 00:12:52.154 "read": true, 00:12:52.154 "write": true, 00:12:52.154 "unmap": true, 00:12:52.154 "flush": true, 00:12:52.154 "reset": true, 00:12:52.154 "nvme_admin": false, 00:12:52.154 "nvme_io": false, 00:12:52.154 
"nvme_io_md": false, 00:12:52.154 "write_zeroes": true, 00:12:52.154 "zcopy": true, 00:12:52.154 "get_zone_info": false, 00:12:52.154 "zone_management": false, 00:12:52.154 "zone_append": false, 00:12:52.154 "compare": false, 00:12:52.154 "compare_and_write": false, 00:12:52.154 "abort": true, 00:12:52.154 "seek_hole": false, 00:12:52.154 "seek_data": false, 00:12:52.154 "copy": true, 00:12:52.154 "nvme_iov_md": false 00:12:52.154 }, 00:12:52.154 "memory_domains": [ 00:12:52.154 { 00:12:52.154 "dma_device_id": "system", 00:12:52.154 "dma_device_type": 1 00:12:52.154 }, 00:12:52.154 { 00:12:52.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.154 "dma_device_type": 2 00:12:52.154 } 00:12:52.154 ], 00:12:52.154 "driver_specific": {} 00:12:52.154 } 00:12:52.154 ] 00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.154 BaseBdev3 00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.154 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.411 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.411 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:52.411 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.411 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.411 [ 00:12:52.411 { 00:12:52.411 "name": "BaseBdev3", 00:12:52.411 "aliases": [ 00:12:52.411 "a5be6c90-bfdf-4c9b-ba1f-c5f57e92aa4c" 00:12:52.411 ], 00:12:52.411 "product_name": "Malloc disk", 00:12:52.411 "block_size": 512, 00:12:52.411 "num_blocks": 65536, 00:12:52.411 "uuid": "a5be6c90-bfdf-4c9b-ba1f-c5f57e92aa4c", 00:12:52.411 "assigned_rate_limits": { 00:12:52.411 "rw_ios_per_sec": 0, 00:12:52.411 "rw_mbytes_per_sec": 0, 00:12:52.411 "r_mbytes_per_sec": 0, 00:12:52.411 "w_mbytes_per_sec": 0 00:12:52.411 }, 00:12:52.411 "claimed": false, 00:12:52.411 "zoned": false, 00:12:52.411 "supported_io_types": { 00:12:52.411 "read": true, 00:12:52.411 "write": true, 00:12:52.411 "unmap": true, 00:12:52.411 "flush": true, 00:12:52.411 "reset": true, 00:12:52.411 "nvme_admin": false, 
00:12:52.411 "nvme_io": false, 00:12:52.411 "nvme_io_md": false, 00:12:52.411 "write_zeroes": true, 00:12:52.411 "zcopy": true, 00:12:52.411 "get_zone_info": false, 00:12:52.411 "zone_management": false, 00:12:52.411 "zone_append": false, 00:12:52.411 "compare": false, 00:12:52.411 "compare_and_write": false, 00:12:52.411 "abort": true, 00:12:52.411 "seek_hole": false, 00:12:52.411 "seek_data": false, 00:12:52.411 "copy": true, 00:12:52.411 "nvme_iov_md": false 00:12:52.411 }, 00:12:52.411 "memory_domains": [ 00:12:52.411 { 00:12:52.411 "dma_device_id": "system", 00:12:52.411 "dma_device_type": 1 00:12:52.411 }, 00:12:52.411 { 00:12:52.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.411 "dma_device_type": 2 00:12:52.411 } 00:12:52.411 ], 00:12:52.411 "driver_specific": {} 00:12:52.411 } 00:12:52.411 ] 00:12:52.411 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.411 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:52.411 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:52.411 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:52.411 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:52.411 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.411 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.411 [2024-12-06 15:39:35.496095] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:52.411 [2024-12-06 15:39:35.496313] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:52.411 [2024-12-06 15:39:35.496471] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:52.411 [2024-12-06 15:39:35.499227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:52.411 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.411 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:52.411 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.411 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.411 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.411 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.411 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:52.411 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.411 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.411 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.411 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.411 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.411 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.411 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.412 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.412 
15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.412 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.412 "name": "Existed_Raid", 00:12:52.412 "uuid": "fd15a9ec-2e5d-4d0a-a9c7-52c5febb273b", 00:12:52.412 "strip_size_kb": 0, 00:12:52.412 "state": "configuring", 00:12:52.412 "raid_level": "raid1", 00:12:52.412 "superblock": true, 00:12:52.412 "num_base_bdevs": 3, 00:12:52.412 "num_base_bdevs_discovered": 2, 00:12:52.412 "num_base_bdevs_operational": 3, 00:12:52.412 "base_bdevs_list": [ 00:12:52.412 { 00:12:52.412 "name": "BaseBdev1", 00:12:52.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.412 "is_configured": false, 00:12:52.412 "data_offset": 0, 00:12:52.412 "data_size": 0 00:12:52.412 }, 00:12:52.412 { 00:12:52.412 "name": "BaseBdev2", 00:12:52.412 "uuid": "1b4fc9c5-683e-43d2-9b57-bf17a4aef799", 00:12:52.412 "is_configured": true, 00:12:52.412 "data_offset": 2048, 00:12:52.412 "data_size": 63488 00:12:52.412 }, 00:12:52.412 { 00:12:52.412 "name": "BaseBdev3", 00:12:52.412 "uuid": "a5be6c90-bfdf-4c9b-ba1f-c5f57e92aa4c", 00:12:52.412 "is_configured": true, 00:12:52.412 "data_offset": 2048, 00:12:52.412 "data_size": 63488 00:12:52.412 } 00:12:52.412 ] 00:12:52.412 }' 00:12:52.412 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.412 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.669 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:52.669 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.669 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.669 [2024-12-06 15:39:35.935536] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:52.669 15:39:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.669 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:52.669 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.669 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.669 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.669 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.669 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:52.669 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.669 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.669 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.669 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.669 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.669 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.669 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.669 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.926 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.927 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.927 "name": 
"Existed_Raid", 00:12:52.927 "uuid": "fd15a9ec-2e5d-4d0a-a9c7-52c5febb273b", 00:12:52.927 "strip_size_kb": 0, 00:12:52.927 "state": "configuring", 00:12:52.927 "raid_level": "raid1", 00:12:52.927 "superblock": true, 00:12:52.927 "num_base_bdevs": 3, 00:12:52.927 "num_base_bdevs_discovered": 1, 00:12:52.927 "num_base_bdevs_operational": 3, 00:12:52.927 "base_bdevs_list": [ 00:12:52.927 { 00:12:52.927 "name": "BaseBdev1", 00:12:52.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.927 "is_configured": false, 00:12:52.927 "data_offset": 0, 00:12:52.927 "data_size": 0 00:12:52.927 }, 00:12:52.927 { 00:12:52.927 "name": null, 00:12:52.927 "uuid": "1b4fc9c5-683e-43d2-9b57-bf17a4aef799", 00:12:52.927 "is_configured": false, 00:12:52.927 "data_offset": 0, 00:12:52.927 "data_size": 63488 00:12:52.927 }, 00:12:52.927 { 00:12:52.927 "name": "BaseBdev3", 00:12:52.927 "uuid": "a5be6c90-bfdf-4c9b-ba1f-c5f57e92aa4c", 00:12:52.927 "is_configured": true, 00:12:52.927 "data_offset": 2048, 00:12:52.927 "data_size": 63488 00:12:52.927 } 00:12:52.927 ] 00:12:52.927 }' 00:12:52.927 15:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.927 15:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.183 15:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.183 15:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:53.183 15:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.183 15:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.183 15:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.183 15:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:53.183 
15:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:53.183 15:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.183 15:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.183 [2024-12-06 15:39:36.468039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:53.183 BaseBdev1 00:12:53.183 15:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.183 15:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:53.183 15:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:53.183 15:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:53.183 15:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:53.183 15:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:53.183 15:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:53.183 15:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:53.183 15:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.183 15:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.441 15:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.441 15:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:53.441 15:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:53.441 15:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.441 [ 00:12:53.441 { 00:12:53.441 "name": "BaseBdev1", 00:12:53.441 "aliases": [ 00:12:53.441 "9f6944be-c1ee-4c00-b20c-99fc9062f484" 00:12:53.441 ], 00:12:53.441 "product_name": "Malloc disk", 00:12:53.441 "block_size": 512, 00:12:53.441 "num_blocks": 65536, 00:12:53.441 "uuid": "9f6944be-c1ee-4c00-b20c-99fc9062f484", 00:12:53.441 "assigned_rate_limits": { 00:12:53.441 "rw_ios_per_sec": 0, 00:12:53.441 "rw_mbytes_per_sec": 0, 00:12:53.441 "r_mbytes_per_sec": 0, 00:12:53.441 "w_mbytes_per_sec": 0 00:12:53.441 }, 00:12:53.441 "claimed": true, 00:12:53.441 "claim_type": "exclusive_write", 00:12:53.441 "zoned": false, 00:12:53.441 "supported_io_types": { 00:12:53.441 "read": true, 00:12:53.441 "write": true, 00:12:53.441 "unmap": true, 00:12:53.441 "flush": true, 00:12:53.441 "reset": true, 00:12:53.441 "nvme_admin": false, 00:12:53.441 "nvme_io": false, 00:12:53.441 "nvme_io_md": false, 00:12:53.441 "write_zeroes": true, 00:12:53.441 "zcopy": true, 00:12:53.441 "get_zone_info": false, 00:12:53.441 "zone_management": false, 00:12:53.441 "zone_append": false, 00:12:53.441 "compare": false, 00:12:53.441 "compare_and_write": false, 00:12:53.441 "abort": true, 00:12:53.441 "seek_hole": false, 00:12:53.441 "seek_data": false, 00:12:53.441 "copy": true, 00:12:53.441 "nvme_iov_md": false 00:12:53.441 }, 00:12:53.441 "memory_domains": [ 00:12:53.441 { 00:12:53.441 "dma_device_id": "system", 00:12:53.441 "dma_device_type": 1 00:12:53.441 }, 00:12:53.441 { 00:12:53.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.441 "dma_device_type": 2 00:12:53.441 } 00:12:53.441 ], 00:12:53.441 "driver_specific": {} 00:12:53.441 } 00:12:53.441 ] 00:12:53.441 15:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.441 15:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:53.441 
15:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:53.441 15:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.441 15:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:53.441 15:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:53.441 15:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:53.441 15:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:53.441 15:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.441 15:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.441 15:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.441 15:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.441 15:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.441 15:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.441 15:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.441 15:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.441 15:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.441 15:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.441 "name": "Existed_Raid", 00:12:53.441 "uuid": "fd15a9ec-2e5d-4d0a-a9c7-52c5febb273b", 00:12:53.441 "strip_size_kb": 0, 
00:12:53.441 "state": "configuring", 00:12:53.441 "raid_level": "raid1", 00:12:53.441 "superblock": true, 00:12:53.441 "num_base_bdevs": 3, 00:12:53.441 "num_base_bdevs_discovered": 2, 00:12:53.441 "num_base_bdevs_operational": 3, 00:12:53.441 "base_bdevs_list": [ 00:12:53.441 { 00:12:53.441 "name": "BaseBdev1", 00:12:53.441 "uuid": "9f6944be-c1ee-4c00-b20c-99fc9062f484", 00:12:53.441 "is_configured": true, 00:12:53.441 "data_offset": 2048, 00:12:53.441 "data_size": 63488 00:12:53.441 }, 00:12:53.441 { 00:12:53.441 "name": null, 00:12:53.441 "uuid": "1b4fc9c5-683e-43d2-9b57-bf17a4aef799", 00:12:53.441 "is_configured": false, 00:12:53.441 "data_offset": 0, 00:12:53.441 "data_size": 63488 00:12:53.441 }, 00:12:53.441 { 00:12:53.441 "name": "BaseBdev3", 00:12:53.441 "uuid": "a5be6c90-bfdf-4c9b-ba1f-c5f57e92aa4c", 00:12:53.441 "is_configured": true, 00:12:53.441 "data_offset": 2048, 00:12:53.441 "data_size": 63488 00:12:53.441 } 00:12:53.441 ] 00:12:53.441 }' 00:12:53.441 15:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.441 15:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.699 15:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.699 15:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.699 15:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.699 15:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:53.699 15:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.956 15:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:53.956 15:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:12:53.956 15:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.956 15:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.956 [2024-12-06 15:39:36.999659] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:53.956 15:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.956 15:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:53.956 15:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.956 15:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:53.956 15:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:53.956 15:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:53.956 15:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:53.956 15:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.956 15:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.956 15:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.956 15:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.956 15:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.956 15:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.956 15:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.956 15:39:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.956 15:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.956 15:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.956 "name": "Existed_Raid", 00:12:53.956 "uuid": "fd15a9ec-2e5d-4d0a-a9c7-52c5febb273b", 00:12:53.956 "strip_size_kb": 0, 00:12:53.956 "state": "configuring", 00:12:53.956 "raid_level": "raid1", 00:12:53.956 "superblock": true, 00:12:53.956 "num_base_bdevs": 3, 00:12:53.956 "num_base_bdevs_discovered": 1, 00:12:53.956 "num_base_bdevs_operational": 3, 00:12:53.956 "base_bdevs_list": [ 00:12:53.956 { 00:12:53.956 "name": "BaseBdev1", 00:12:53.956 "uuid": "9f6944be-c1ee-4c00-b20c-99fc9062f484", 00:12:53.956 "is_configured": true, 00:12:53.956 "data_offset": 2048, 00:12:53.956 "data_size": 63488 00:12:53.956 }, 00:12:53.956 { 00:12:53.956 "name": null, 00:12:53.956 "uuid": "1b4fc9c5-683e-43d2-9b57-bf17a4aef799", 00:12:53.956 "is_configured": false, 00:12:53.956 "data_offset": 0, 00:12:53.956 "data_size": 63488 00:12:53.956 }, 00:12:53.956 { 00:12:53.956 "name": null, 00:12:53.956 "uuid": "a5be6c90-bfdf-4c9b-ba1f-c5f57e92aa4c", 00:12:53.956 "is_configured": false, 00:12:53.956 "data_offset": 0, 00:12:53.956 "data_size": 63488 00:12:53.956 } 00:12:53.956 ] 00:12:53.956 }' 00:12:53.956 15:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.956 15:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.214 15:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.214 15:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.214 15:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.214 15:39:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:54.214 15:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.214 15:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:54.214 15:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:54.214 15:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.214 15:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.214 [2024-12-06 15:39:37.447094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:54.214 15:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.214 15:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:54.214 15:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:54.214 15:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:54.214 15:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:54.214 15:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:54.214 15:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:54.214 15:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.214 15:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.214 15:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:54.214 15:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.214 15:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.214 15:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.214 15:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.214 15:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.214 15:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.214 15:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.214 "name": "Existed_Raid", 00:12:54.214 "uuid": "fd15a9ec-2e5d-4d0a-a9c7-52c5febb273b", 00:12:54.214 "strip_size_kb": 0, 00:12:54.214 "state": "configuring", 00:12:54.214 "raid_level": "raid1", 00:12:54.214 "superblock": true, 00:12:54.214 "num_base_bdevs": 3, 00:12:54.214 "num_base_bdevs_discovered": 2, 00:12:54.214 "num_base_bdevs_operational": 3, 00:12:54.214 "base_bdevs_list": [ 00:12:54.214 { 00:12:54.214 "name": "BaseBdev1", 00:12:54.214 "uuid": "9f6944be-c1ee-4c00-b20c-99fc9062f484", 00:12:54.214 "is_configured": true, 00:12:54.214 "data_offset": 2048, 00:12:54.214 "data_size": 63488 00:12:54.214 }, 00:12:54.214 { 00:12:54.214 "name": null, 00:12:54.214 "uuid": "1b4fc9c5-683e-43d2-9b57-bf17a4aef799", 00:12:54.214 "is_configured": false, 00:12:54.214 "data_offset": 0, 00:12:54.214 "data_size": 63488 00:12:54.214 }, 00:12:54.214 { 00:12:54.214 "name": "BaseBdev3", 00:12:54.214 "uuid": "a5be6c90-bfdf-4c9b-ba1f-c5f57e92aa4c", 00:12:54.214 "is_configured": true, 00:12:54.214 "data_offset": 2048, 00:12:54.214 "data_size": 63488 00:12:54.214 } 00:12:54.214 ] 00:12:54.214 }' 00:12:54.214 15:39:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.214 15:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.778 15:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:54.778 15:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.778 15:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.778 15:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.778 15:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.778 15:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:54.778 15:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:54.778 15:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.778 15:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.778 [2024-12-06 15:39:37.930768] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:54.778 15:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.778 15:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:54.778 15:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:54.778 15:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:54.778 15:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:54.778 15:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:12:54.778 15:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:54.778 15:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.778 15:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.778 15:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.778 15:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.778 15:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.778 15:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.778 15:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.778 15:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.035 15:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.035 15:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.035 "name": "Existed_Raid", 00:12:55.035 "uuid": "fd15a9ec-2e5d-4d0a-a9c7-52c5febb273b", 00:12:55.035 "strip_size_kb": 0, 00:12:55.035 "state": "configuring", 00:12:55.035 "raid_level": "raid1", 00:12:55.035 "superblock": true, 00:12:55.035 "num_base_bdevs": 3, 00:12:55.035 "num_base_bdevs_discovered": 1, 00:12:55.036 "num_base_bdevs_operational": 3, 00:12:55.036 "base_bdevs_list": [ 00:12:55.036 { 00:12:55.036 "name": null, 00:12:55.036 "uuid": "9f6944be-c1ee-4c00-b20c-99fc9062f484", 00:12:55.036 "is_configured": false, 00:12:55.036 "data_offset": 0, 00:12:55.036 "data_size": 63488 00:12:55.036 }, 00:12:55.036 { 00:12:55.036 "name": null, 00:12:55.036 "uuid": 
"1b4fc9c5-683e-43d2-9b57-bf17a4aef799", 00:12:55.036 "is_configured": false, 00:12:55.036 "data_offset": 0, 00:12:55.036 "data_size": 63488 00:12:55.036 }, 00:12:55.036 { 00:12:55.036 "name": "BaseBdev3", 00:12:55.036 "uuid": "a5be6c90-bfdf-4c9b-ba1f-c5f57e92aa4c", 00:12:55.036 "is_configured": true, 00:12:55.036 "data_offset": 2048, 00:12:55.036 "data_size": 63488 00:12:55.036 } 00:12:55.036 ] 00:12:55.036 }' 00:12:55.036 15:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.036 15:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.294 15:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:55.294 15:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.294 15:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.294 15:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.294 15:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.294 15:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:55.294 15:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:55.294 15:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.294 15:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.294 [2024-12-06 15:39:38.505819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:55.294 15:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.294 15:39:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:55.294 15:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:55.294 15:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:55.294 15:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:55.294 15:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:55.294 15:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:55.294 15:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.294 15:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.294 15:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.294 15:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.294 15:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.294 15:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.294 15:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.294 15:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.294 15:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.294 15:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.295 "name": "Existed_Raid", 00:12:55.295 "uuid": "fd15a9ec-2e5d-4d0a-a9c7-52c5febb273b", 00:12:55.295 "strip_size_kb": 0, 00:12:55.295 "state": "configuring", 00:12:55.295 
"raid_level": "raid1", 00:12:55.295 "superblock": true, 00:12:55.295 "num_base_bdevs": 3, 00:12:55.295 "num_base_bdevs_discovered": 2, 00:12:55.295 "num_base_bdevs_operational": 3, 00:12:55.295 "base_bdevs_list": [ 00:12:55.295 { 00:12:55.295 "name": null, 00:12:55.295 "uuid": "9f6944be-c1ee-4c00-b20c-99fc9062f484", 00:12:55.295 "is_configured": false, 00:12:55.295 "data_offset": 0, 00:12:55.295 "data_size": 63488 00:12:55.295 }, 00:12:55.295 { 00:12:55.295 "name": "BaseBdev2", 00:12:55.295 "uuid": "1b4fc9c5-683e-43d2-9b57-bf17a4aef799", 00:12:55.295 "is_configured": true, 00:12:55.295 "data_offset": 2048, 00:12:55.295 "data_size": 63488 00:12:55.295 }, 00:12:55.295 { 00:12:55.295 "name": "BaseBdev3", 00:12:55.295 "uuid": "a5be6c90-bfdf-4c9b-ba1f-c5f57e92aa4c", 00:12:55.295 "is_configured": true, 00:12:55.295 "data_offset": 2048, 00:12:55.295 "data_size": 63488 00:12:55.295 } 00:12:55.295 ] 00:12:55.295 }' 00:12:55.295 15:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.295 15:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.871 15:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.871 15:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.871 15:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.871 15:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:55.871 15:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.871 15:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:55.871 15:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:55.871 15:39:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.871 15:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.871 15:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.871 15:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.871 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9f6944be-c1ee-4c00-b20c-99fc9062f484 00:12:55.871 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.871 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.871 [2024-12-06 15:39:39.066586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:55.871 [2024-12-06 15:39:39.066901] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:55.871 [2024-12-06 15:39:39.066918] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:55.871 [2024-12-06 15:39:39.067226] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:55.871 [2024-12-06 15:39:39.067409] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:55.871 [2024-12-06 15:39:39.067424] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:55.871 [2024-12-06 15:39:39.067615] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:55.871 NewBaseBdev 00:12:55.871 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.871 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:55.871 
15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:55.871 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:55.871 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:55.871 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:55.871 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:55.871 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:55.871 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.871 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.871 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.871 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:55.871 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.871 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.871 [ 00:12:55.871 { 00:12:55.871 "name": "NewBaseBdev", 00:12:55.871 "aliases": [ 00:12:55.871 "9f6944be-c1ee-4c00-b20c-99fc9062f484" 00:12:55.871 ], 00:12:55.871 "product_name": "Malloc disk", 00:12:55.871 "block_size": 512, 00:12:55.871 "num_blocks": 65536, 00:12:55.871 "uuid": "9f6944be-c1ee-4c00-b20c-99fc9062f484", 00:12:55.871 "assigned_rate_limits": { 00:12:55.871 "rw_ios_per_sec": 0, 00:12:55.871 "rw_mbytes_per_sec": 0, 00:12:55.871 "r_mbytes_per_sec": 0, 00:12:55.871 "w_mbytes_per_sec": 0 00:12:55.871 }, 00:12:55.871 "claimed": true, 00:12:55.871 "claim_type": "exclusive_write", 00:12:55.871 
"zoned": false, 00:12:55.871 "supported_io_types": { 00:12:55.871 "read": true, 00:12:55.871 "write": true, 00:12:55.871 "unmap": true, 00:12:55.871 "flush": true, 00:12:55.871 "reset": true, 00:12:55.871 "nvme_admin": false, 00:12:55.871 "nvme_io": false, 00:12:55.871 "nvme_io_md": false, 00:12:55.871 "write_zeroes": true, 00:12:55.871 "zcopy": true, 00:12:55.871 "get_zone_info": false, 00:12:55.871 "zone_management": false, 00:12:55.871 "zone_append": false, 00:12:55.871 "compare": false, 00:12:55.871 "compare_and_write": false, 00:12:55.871 "abort": true, 00:12:55.871 "seek_hole": false, 00:12:55.871 "seek_data": false, 00:12:55.871 "copy": true, 00:12:55.871 "nvme_iov_md": false 00:12:55.871 }, 00:12:55.871 "memory_domains": [ 00:12:55.871 { 00:12:55.871 "dma_device_id": "system", 00:12:55.871 "dma_device_type": 1 00:12:55.871 }, 00:12:55.871 { 00:12:55.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.871 "dma_device_type": 2 00:12:55.871 } 00:12:55.871 ], 00:12:55.871 "driver_specific": {} 00:12:55.872 } 00:12:55.872 ] 00:12:55.872 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.872 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:55.872 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:55.872 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:55.872 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:55.872 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:55.872 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:55.872 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:12:55.872 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.872 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.872 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.872 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.872 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.872 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.872 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.872 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.872 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.130 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.130 "name": "Existed_Raid", 00:12:56.130 "uuid": "fd15a9ec-2e5d-4d0a-a9c7-52c5febb273b", 00:12:56.130 "strip_size_kb": 0, 00:12:56.130 "state": "online", 00:12:56.130 "raid_level": "raid1", 00:12:56.130 "superblock": true, 00:12:56.130 "num_base_bdevs": 3, 00:12:56.130 "num_base_bdevs_discovered": 3, 00:12:56.130 "num_base_bdevs_operational": 3, 00:12:56.130 "base_bdevs_list": [ 00:12:56.130 { 00:12:56.130 "name": "NewBaseBdev", 00:12:56.130 "uuid": "9f6944be-c1ee-4c00-b20c-99fc9062f484", 00:12:56.130 "is_configured": true, 00:12:56.130 "data_offset": 2048, 00:12:56.130 "data_size": 63488 00:12:56.130 }, 00:12:56.130 { 00:12:56.130 "name": "BaseBdev2", 00:12:56.130 "uuid": "1b4fc9c5-683e-43d2-9b57-bf17a4aef799", 00:12:56.130 "is_configured": true, 00:12:56.131 "data_offset": 2048, 00:12:56.131 "data_size": 63488 00:12:56.131 }, 00:12:56.131 
{ 00:12:56.131 "name": "BaseBdev3", 00:12:56.131 "uuid": "a5be6c90-bfdf-4c9b-ba1f-c5f57e92aa4c", 00:12:56.131 "is_configured": true, 00:12:56.131 "data_offset": 2048, 00:12:56.131 "data_size": 63488 00:12:56.131 } 00:12:56.131 ] 00:12:56.131 }' 00:12:56.131 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.131 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.389 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:56.389 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:56.389 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:56.389 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:56.389 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:56.389 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:56.389 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:56.389 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:56.389 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.389 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.389 [2024-12-06 15:39:39.534316] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:56.389 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.389 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:56.389 "name": "Existed_Raid", 00:12:56.389 
"aliases": [ 00:12:56.389 "fd15a9ec-2e5d-4d0a-a9c7-52c5febb273b" 00:12:56.389 ], 00:12:56.389 "product_name": "Raid Volume", 00:12:56.389 "block_size": 512, 00:12:56.389 "num_blocks": 63488, 00:12:56.389 "uuid": "fd15a9ec-2e5d-4d0a-a9c7-52c5febb273b", 00:12:56.389 "assigned_rate_limits": { 00:12:56.389 "rw_ios_per_sec": 0, 00:12:56.389 "rw_mbytes_per_sec": 0, 00:12:56.390 "r_mbytes_per_sec": 0, 00:12:56.390 "w_mbytes_per_sec": 0 00:12:56.390 }, 00:12:56.390 "claimed": false, 00:12:56.390 "zoned": false, 00:12:56.390 "supported_io_types": { 00:12:56.390 "read": true, 00:12:56.390 "write": true, 00:12:56.390 "unmap": false, 00:12:56.390 "flush": false, 00:12:56.390 "reset": true, 00:12:56.390 "nvme_admin": false, 00:12:56.390 "nvme_io": false, 00:12:56.390 "nvme_io_md": false, 00:12:56.390 "write_zeroes": true, 00:12:56.390 "zcopy": false, 00:12:56.390 "get_zone_info": false, 00:12:56.390 "zone_management": false, 00:12:56.390 "zone_append": false, 00:12:56.390 "compare": false, 00:12:56.390 "compare_and_write": false, 00:12:56.390 "abort": false, 00:12:56.390 "seek_hole": false, 00:12:56.390 "seek_data": false, 00:12:56.390 "copy": false, 00:12:56.390 "nvme_iov_md": false 00:12:56.390 }, 00:12:56.390 "memory_domains": [ 00:12:56.390 { 00:12:56.390 "dma_device_id": "system", 00:12:56.390 "dma_device_type": 1 00:12:56.390 }, 00:12:56.390 { 00:12:56.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.390 "dma_device_type": 2 00:12:56.390 }, 00:12:56.390 { 00:12:56.390 "dma_device_id": "system", 00:12:56.390 "dma_device_type": 1 00:12:56.390 }, 00:12:56.390 { 00:12:56.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.390 "dma_device_type": 2 00:12:56.390 }, 00:12:56.390 { 00:12:56.390 "dma_device_id": "system", 00:12:56.390 "dma_device_type": 1 00:12:56.390 }, 00:12:56.390 { 00:12:56.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.390 "dma_device_type": 2 00:12:56.390 } 00:12:56.390 ], 00:12:56.390 "driver_specific": { 00:12:56.390 "raid": { 00:12:56.390 
"uuid": "fd15a9ec-2e5d-4d0a-a9c7-52c5febb273b", 00:12:56.390 "strip_size_kb": 0, 00:12:56.390 "state": "online", 00:12:56.390 "raid_level": "raid1", 00:12:56.390 "superblock": true, 00:12:56.390 "num_base_bdevs": 3, 00:12:56.390 "num_base_bdevs_discovered": 3, 00:12:56.390 "num_base_bdevs_operational": 3, 00:12:56.390 "base_bdevs_list": [ 00:12:56.390 { 00:12:56.390 "name": "NewBaseBdev", 00:12:56.390 "uuid": "9f6944be-c1ee-4c00-b20c-99fc9062f484", 00:12:56.390 "is_configured": true, 00:12:56.390 "data_offset": 2048, 00:12:56.390 "data_size": 63488 00:12:56.390 }, 00:12:56.390 { 00:12:56.390 "name": "BaseBdev2", 00:12:56.390 "uuid": "1b4fc9c5-683e-43d2-9b57-bf17a4aef799", 00:12:56.390 "is_configured": true, 00:12:56.390 "data_offset": 2048, 00:12:56.390 "data_size": 63488 00:12:56.390 }, 00:12:56.390 { 00:12:56.390 "name": "BaseBdev3", 00:12:56.390 "uuid": "a5be6c90-bfdf-4c9b-ba1f-c5f57e92aa4c", 00:12:56.390 "is_configured": true, 00:12:56.390 "data_offset": 2048, 00:12:56.390 "data_size": 63488 00:12:56.390 } 00:12:56.390 ] 00:12:56.390 } 00:12:56.390 } 00:12:56.390 }' 00:12:56.390 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:56.390 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:56.390 BaseBdev2 00:12:56.390 BaseBdev3' 00:12:56.390 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:56.390 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:56.390 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:56.390 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:56.390 15:39:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:56.390 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.390 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.649 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.649 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:56.649 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:56.649 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:56.649 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:56.649 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.649 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.649 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:56.649 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.649 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:56.649 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:56.649 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:56.649 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:56.649 15:39:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.649 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:56.649 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.649 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.649 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:56.649 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:56.649 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:56.649 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.649 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.649 [2024-12-06 15:39:39.801692] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:56.649 [2024-12-06 15:39:39.801736] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:56.649 [2024-12-06 15:39:39.801841] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:56.649 [2024-12-06 15:39:39.802216] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:56.649 [2024-12-06 15:39:39.802231] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:56.649 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.649 15:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68047 00:12:56.649 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 68047 ']' 00:12:56.649 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68047 00:12:56.649 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:56.649 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:56.649 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68047 00:12:56.649 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:56.649 killing process with pid 68047 00:12:56.649 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:56.649 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68047' 00:12:56.649 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68047 00:12:56.649 [2024-12-06 15:39:39.857010] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:56.649 15:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68047 00:12:56.908 [2024-12-06 15:39:40.194440] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:58.283 15:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:58.283 00:12:58.283 real 0m10.760s 00:12:58.283 user 0m16.741s 00:12:58.283 sys 0m2.238s 00:12:58.283 15:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:58.283 15:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.283 ************************************ 00:12:58.283 END TEST raid_state_function_test_sb 00:12:58.283 ************************************ 00:12:58.283 15:39:41 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:12:58.283 15:39:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:58.283 15:39:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.283 15:39:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:58.283 ************************************ 00:12:58.283 START TEST raid_superblock_test 00:12:58.283 ************************************ 00:12:58.283 15:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:12:58.283 15:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:58.283 15:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:12:58.283 15:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:58.283 15:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:58.283 15:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:58.283 15:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:58.283 15:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:58.283 15:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:58.283 15:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:58.283 15:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:58.283 15:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:58.283 15:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:58.283 15:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:58.283 15:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:12:58.283 15:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:58.283 15:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68668 00:12:58.283 15:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68668 00:12:58.283 15:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:58.283 15:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68668 ']' 00:12:58.283 15:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.283 15:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:58.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.283 15:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.283 15:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:58.283 15:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.541 [2024-12-06 15:39:41.646085] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:12:58.541 [2024-12-06 15:39:41.646254] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68668 ] 00:12:58.541 [2024-12-06 15:39:41.834296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.799 [2024-12-06 15:39:41.979447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.065 [2024-12-06 15:39:42.229260] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:59.065 [2024-12-06 15:39:42.229351] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:59.338 
15:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.338 malloc1 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.338 [2024-12-06 15:39:42.554953] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:59.338 [2024-12-06 15:39:42.555167] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.338 [2024-12-06 15:39:42.555234] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:59.338 [2024-12-06 15:39:42.555329] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.338 [2024-12-06 15:39:42.558161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.338 [2024-12-06 15:39:42.558327] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:59.338 pt1 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.338 malloc2 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.338 [2024-12-06 15:39:42.619148] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:59.338 [2024-12-06 15:39:42.619211] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.338 [2024-12-06 15:39:42.619244] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:59.338 [2024-12-06 15:39:42.619257] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.338 [2024-12-06 15:39:42.622000] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.338 [2024-12-06 15:39:42.622039] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:59.338 
pt2 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.338 15:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.596 malloc3 00:12:59.596 15:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.596 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:59.596 15:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.596 15:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.596 [2024-12-06 15:39:42.693823] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:59.596 [2024-12-06 15:39:42.695099] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.596 [2024-12-06 15:39:42.695146] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:59.596 [2024-12-06 15:39:42.695162] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.596 [2024-12-06 15:39:42.698021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.596 [2024-12-06 15:39:42.698062] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:59.596 pt3 00:12:59.596 15:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.596 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:59.596 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:59.596 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:12:59.596 15:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.596 15:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.596 [2024-12-06 15:39:42.710278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:59.596 [2024-12-06 15:39:42.712746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:59.596 [2024-12-06 15:39:42.712927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:59.596 [2024-12-06 15:39:42.713135] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:59.596 [2024-12-06 15:39:42.713356] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:59.596 [2024-12-06 15:39:42.713673] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:59.596 
[2024-12-06 15:39:42.713954] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:59.596 [2024-12-06 15:39:42.714059] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:59.596 [2024-12-06 15:39:42.714365] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.596 15:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.596 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:59.596 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.596 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.596 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.596 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.596 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:59.596 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.596 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.596 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.596 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.596 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.596 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.596 15:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.596 15:39:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:59.596 15:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.596 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.596 "name": "raid_bdev1", 00:12:59.596 "uuid": "20272311-52cf-46eb-9ac5-65289f7b42b3", 00:12:59.596 "strip_size_kb": 0, 00:12:59.596 "state": "online", 00:12:59.596 "raid_level": "raid1", 00:12:59.596 "superblock": true, 00:12:59.596 "num_base_bdevs": 3, 00:12:59.596 "num_base_bdevs_discovered": 3, 00:12:59.596 "num_base_bdevs_operational": 3, 00:12:59.596 "base_bdevs_list": [ 00:12:59.596 { 00:12:59.596 "name": "pt1", 00:12:59.596 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:59.596 "is_configured": true, 00:12:59.596 "data_offset": 2048, 00:12:59.596 "data_size": 63488 00:12:59.596 }, 00:12:59.596 { 00:12:59.596 "name": "pt2", 00:12:59.596 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:59.596 "is_configured": true, 00:12:59.596 "data_offset": 2048, 00:12:59.596 "data_size": 63488 00:12:59.596 }, 00:12:59.596 { 00:12:59.596 "name": "pt3", 00:12:59.596 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:59.596 "is_configured": true, 00:12:59.596 "data_offset": 2048, 00:12:59.596 "data_size": 63488 00:12:59.596 } 00:12:59.596 ] 00:12:59.596 }' 00:12:59.596 15:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.596 15:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.854 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:59.854 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:59.854 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:59.854 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:59.854 15:39:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:59.854 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:59.854 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:59.854 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:59.854 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.854 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.112 [2024-12-06 15:39:43.150149] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:00.112 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.112 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:00.112 "name": "raid_bdev1", 00:13:00.112 "aliases": [ 00:13:00.112 "20272311-52cf-46eb-9ac5-65289f7b42b3" 00:13:00.112 ], 00:13:00.112 "product_name": "Raid Volume", 00:13:00.112 "block_size": 512, 00:13:00.112 "num_blocks": 63488, 00:13:00.112 "uuid": "20272311-52cf-46eb-9ac5-65289f7b42b3", 00:13:00.112 "assigned_rate_limits": { 00:13:00.112 "rw_ios_per_sec": 0, 00:13:00.112 "rw_mbytes_per_sec": 0, 00:13:00.112 "r_mbytes_per_sec": 0, 00:13:00.112 "w_mbytes_per_sec": 0 00:13:00.112 }, 00:13:00.112 "claimed": false, 00:13:00.112 "zoned": false, 00:13:00.112 "supported_io_types": { 00:13:00.112 "read": true, 00:13:00.112 "write": true, 00:13:00.112 "unmap": false, 00:13:00.112 "flush": false, 00:13:00.112 "reset": true, 00:13:00.112 "nvme_admin": false, 00:13:00.112 "nvme_io": false, 00:13:00.112 "nvme_io_md": false, 00:13:00.112 "write_zeroes": true, 00:13:00.112 "zcopy": false, 00:13:00.112 "get_zone_info": false, 00:13:00.112 "zone_management": false, 00:13:00.112 "zone_append": false, 00:13:00.112 "compare": false, 00:13:00.112 
"compare_and_write": false, 00:13:00.112 "abort": false, 00:13:00.112 "seek_hole": false, 00:13:00.112 "seek_data": false, 00:13:00.112 "copy": false, 00:13:00.112 "nvme_iov_md": false 00:13:00.112 }, 00:13:00.112 "memory_domains": [ 00:13:00.112 { 00:13:00.112 "dma_device_id": "system", 00:13:00.112 "dma_device_type": 1 00:13:00.112 }, 00:13:00.112 { 00:13:00.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.112 "dma_device_type": 2 00:13:00.112 }, 00:13:00.112 { 00:13:00.112 "dma_device_id": "system", 00:13:00.112 "dma_device_type": 1 00:13:00.112 }, 00:13:00.112 { 00:13:00.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.112 "dma_device_type": 2 00:13:00.112 }, 00:13:00.112 { 00:13:00.112 "dma_device_id": "system", 00:13:00.112 "dma_device_type": 1 00:13:00.112 }, 00:13:00.112 { 00:13:00.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.112 "dma_device_type": 2 00:13:00.112 } 00:13:00.112 ], 00:13:00.112 "driver_specific": { 00:13:00.112 "raid": { 00:13:00.112 "uuid": "20272311-52cf-46eb-9ac5-65289f7b42b3", 00:13:00.112 "strip_size_kb": 0, 00:13:00.112 "state": "online", 00:13:00.112 "raid_level": "raid1", 00:13:00.112 "superblock": true, 00:13:00.112 "num_base_bdevs": 3, 00:13:00.112 "num_base_bdevs_discovered": 3, 00:13:00.112 "num_base_bdevs_operational": 3, 00:13:00.112 "base_bdevs_list": [ 00:13:00.112 { 00:13:00.112 "name": "pt1", 00:13:00.112 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:00.112 "is_configured": true, 00:13:00.113 "data_offset": 2048, 00:13:00.113 "data_size": 63488 00:13:00.113 }, 00:13:00.113 { 00:13:00.113 "name": "pt2", 00:13:00.113 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:00.113 "is_configured": true, 00:13:00.113 "data_offset": 2048, 00:13:00.113 "data_size": 63488 00:13:00.113 }, 00:13:00.113 { 00:13:00.113 "name": "pt3", 00:13:00.113 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:00.113 "is_configured": true, 00:13:00.113 "data_offset": 2048, 00:13:00.113 "data_size": 63488 00:13:00.113 } 
00:13:00.113 ] 00:13:00.113 } 00:13:00.113 } 00:13:00.113 }' 00:13:00.113 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:00.113 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:00.113 pt2 00:13:00.113 pt3' 00:13:00.113 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:00.113 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:00.113 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:00.113 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:00.113 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:00.113 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.113 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.113 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.113 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:00.113 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:00.113 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:00.113 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:00.113 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.113 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.113 15:39:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:00.113 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.113 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:00.113 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:00.113 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:00.113 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:00.113 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.113 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.113 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:00.113 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:00.370 [2024-12-06 15:39:43.421995] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=20272311-52cf-46eb-9ac5-65289f7b42b3 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 20272311-52cf-46eb-9ac5-65289f7b42b3 ']' 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.370 [2024-12-06 15:39:43.469689] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:00.370 [2024-12-06 15:39:43.469730] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:00.370 [2024-12-06 15:39:43.469848] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:00.370 [2024-12-06 15:39:43.469946] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:00.370 [2024-12-06 15:39:43.469959] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:00.370 
15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.370 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.370 [2024-12-06 15:39:43.617688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:00.370 [2024-12-06 15:39:43.620198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:00.370 [2024-12-06 15:39:43.620275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed 00:13:00.370 [2024-12-06 15:39:43.620338] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:00.370 [2024-12-06 15:39:43.620408] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:00.370 [2024-12-06 15:39:43.620431] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:00.370 [2024-12-06 15:39:43.620454] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:00.370 [2024-12-06 15:39:43.620467] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:13:00.370 request: 00:13:00.370 { 00:13:00.370 "name": "raid_bdev1", 00:13:00.370 "raid_level": "raid1", 00:13:00.370 "base_bdevs": [ 00:13:00.370 "malloc1", 00:13:00.370 "malloc2", 00:13:00.370 "malloc3" 00:13:00.370 ], 00:13:00.370 "superblock": false, 00:13:00.370 "method": "bdev_raid_create", 00:13:00.370 "req_id": 1 00:13:00.370 } 00:13:00.370 Got JSON-RPC error response 00:13:00.370 response: 00:13:00.370 { 00:13:00.370 "code": -17, 00:13:00.370 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:00.370 } 00:13:00.371 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:00.371 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:00.371 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:00.371 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:00.371 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:00.371 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.371 15:39:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:00.371 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.371 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.371 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.629 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:00.629 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:00.629 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:00.629 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.629 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.629 [2024-12-06 15:39:43.677525] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:00.629 [2024-12-06 15:39:43.677739] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.629 [2024-12-06 15:39:43.677842] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:00.629 [2024-12-06 15:39:43.677911] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.629 [2024-12-06 15:39:43.680887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.629 [2024-12-06 15:39:43.681030] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:00.629 [2024-12-06 15:39:43.681233] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:00.629 [2024-12-06 15:39:43.681388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:00.629 pt1 00:13:00.629 15:39:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.629 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:00.629 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.629 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.629 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:00.629 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:00.629 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:00.629 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.629 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.629 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.629 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.629 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.629 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.629 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.629 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.629 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.629 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.629 "name": "raid_bdev1", 00:13:00.629 "uuid": "20272311-52cf-46eb-9ac5-65289f7b42b3", 00:13:00.629 "strip_size_kb": 0, 00:13:00.629 "state": "configuring", 00:13:00.629 
"raid_level": "raid1", 00:13:00.629 "superblock": true, 00:13:00.629 "num_base_bdevs": 3, 00:13:00.629 "num_base_bdevs_discovered": 1, 00:13:00.629 "num_base_bdevs_operational": 3, 00:13:00.629 "base_bdevs_list": [ 00:13:00.629 { 00:13:00.629 "name": "pt1", 00:13:00.629 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:00.629 "is_configured": true, 00:13:00.629 "data_offset": 2048, 00:13:00.629 "data_size": 63488 00:13:00.629 }, 00:13:00.629 { 00:13:00.629 "name": null, 00:13:00.629 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:00.629 "is_configured": false, 00:13:00.629 "data_offset": 2048, 00:13:00.629 "data_size": 63488 00:13:00.629 }, 00:13:00.629 { 00:13:00.629 "name": null, 00:13:00.629 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:00.629 "is_configured": false, 00:13:00.629 "data_offset": 2048, 00:13:00.629 "data_size": 63488 00:13:00.629 } 00:13:00.629 ] 00:13:00.629 }' 00:13:00.629 15:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.629 15:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.887 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:13:00.887 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:00.887 15:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.887 15:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.887 [2024-12-06 15:39:44.073299] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:00.887 [2024-12-06 15:39:44.073396] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.887 [2024-12-06 15:39:44.073429] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:00.887 [2024-12-06 15:39:44.073441] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.887 [2024-12-06 15:39:44.074056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.887 [2024-12-06 15:39:44.074087] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:00.887 [2024-12-06 15:39:44.074219] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:00.887 [2024-12-06 15:39:44.074251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:00.887 pt2 00:13:00.887 15:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.887 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:00.887 15:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.887 15:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.887 [2024-12-06 15:39:44.085325] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:00.887 15:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.887 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:00.887 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.887 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.887 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:00.887 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:00.887 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:00.887 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:13:00.887 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.888 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.888 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.888 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.888 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.888 15:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.888 15:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.888 15:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.888 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.888 "name": "raid_bdev1", 00:13:00.888 "uuid": "20272311-52cf-46eb-9ac5-65289f7b42b3", 00:13:00.888 "strip_size_kb": 0, 00:13:00.888 "state": "configuring", 00:13:00.888 "raid_level": "raid1", 00:13:00.888 "superblock": true, 00:13:00.888 "num_base_bdevs": 3, 00:13:00.888 "num_base_bdevs_discovered": 1, 00:13:00.888 "num_base_bdevs_operational": 3, 00:13:00.888 "base_bdevs_list": [ 00:13:00.888 { 00:13:00.888 "name": "pt1", 00:13:00.888 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:00.888 "is_configured": true, 00:13:00.888 "data_offset": 2048, 00:13:00.888 "data_size": 63488 00:13:00.888 }, 00:13:00.888 { 00:13:00.888 "name": null, 00:13:00.888 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:00.888 "is_configured": false, 00:13:00.888 "data_offset": 0, 00:13:00.888 "data_size": 63488 00:13:00.888 }, 00:13:00.888 { 00:13:00.888 "name": null, 00:13:00.888 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:00.888 "is_configured": false, 00:13:00.888 "data_offset": 2048, 00:13:00.888 
"data_size": 63488 00:13:00.888 } 00:13:00.888 ] 00:13:00.888 }' 00:13:00.888 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.888 15:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.455 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:01.455 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:01.455 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:01.455 15:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.455 15:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.455 [2024-12-06 15:39:44.532691] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:01.455 [2024-12-06 15:39:44.532788] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.455 [2024-12-06 15:39:44.532816] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:01.455 [2024-12-06 15:39:44.532831] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.455 [2024-12-06 15:39:44.533441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.455 [2024-12-06 15:39:44.533467] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:01.455 [2024-12-06 15:39:44.533595] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:01.455 [2024-12-06 15:39:44.533643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:01.455 pt2 00:13:01.455 15:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.455 15:39:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:01.455 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:01.455 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:01.455 15:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.455 15:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.455 [2024-12-06 15:39:44.544703] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:01.455 [2024-12-06 15:39:44.544780] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.455 [2024-12-06 15:39:44.544804] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:01.455 [2024-12-06 15:39:44.544821] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.455 [2024-12-06 15:39:44.545409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.455 [2024-12-06 15:39:44.545445] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:01.455 [2024-12-06 15:39:44.545562] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:01.455 [2024-12-06 15:39:44.545599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:01.455 [2024-12-06 15:39:44.545764] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:01.455 [2024-12-06 15:39:44.545780] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:01.455 [2024-12-06 15:39:44.546079] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:01.455 [2024-12-06 15:39:44.546282] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:13:01.455 [2024-12-06 15:39:44.546294] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:01.455 [2024-12-06 15:39:44.546477] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.455 pt3 00:13:01.455 15:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.455 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:01.455 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:01.455 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:01.455 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.455 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.455 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.455 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.455 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:01.455 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.455 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.455 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.455 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.455 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.455 15:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.455 15:39:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:01.455 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.455 15:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.455 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.455 "name": "raid_bdev1", 00:13:01.455 "uuid": "20272311-52cf-46eb-9ac5-65289f7b42b3", 00:13:01.455 "strip_size_kb": 0, 00:13:01.455 "state": "online", 00:13:01.455 "raid_level": "raid1", 00:13:01.455 "superblock": true, 00:13:01.455 "num_base_bdevs": 3, 00:13:01.455 "num_base_bdevs_discovered": 3, 00:13:01.455 "num_base_bdevs_operational": 3, 00:13:01.455 "base_bdevs_list": [ 00:13:01.455 { 00:13:01.455 "name": "pt1", 00:13:01.455 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:01.455 "is_configured": true, 00:13:01.455 "data_offset": 2048, 00:13:01.455 "data_size": 63488 00:13:01.455 }, 00:13:01.455 { 00:13:01.455 "name": "pt2", 00:13:01.455 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:01.455 "is_configured": true, 00:13:01.455 "data_offset": 2048, 00:13:01.455 "data_size": 63488 00:13:01.455 }, 00:13:01.455 { 00:13:01.455 "name": "pt3", 00:13:01.455 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:01.455 "is_configured": true, 00:13:01.455 "data_offset": 2048, 00:13:01.455 "data_size": 63488 00:13:01.456 } 00:13:01.456 ] 00:13:01.456 }' 00:13:01.456 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.456 15:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.714 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:01.714 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:01.714 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:13:01.714 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:01.714 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:01.714 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:01.714 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:01.714 15:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:01.714 15:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.714 15:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.714 [2024-12-06 15:39:44.977014] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:01.973 15:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.973 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:01.973 "name": "raid_bdev1", 00:13:01.973 "aliases": [ 00:13:01.973 "20272311-52cf-46eb-9ac5-65289f7b42b3" 00:13:01.973 ], 00:13:01.973 "product_name": "Raid Volume", 00:13:01.973 "block_size": 512, 00:13:01.973 "num_blocks": 63488, 00:13:01.973 "uuid": "20272311-52cf-46eb-9ac5-65289f7b42b3", 00:13:01.973 "assigned_rate_limits": { 00:13:01.973 "rw_ios_per_sec": 0, 00:13:01.973 "rw_mbytes_per_sec": 0, 00:13:01.973 "r_mbytes_per_sec": 0, 00:13:01.973 "w_mbytes_per_sec": 0 00:13:01.973 }, 00:13:01.973 "claimed": false, 00:13:01.973 "zoned": false, 00:13:01.973 "supported_io_types": { 00:13:01.973 "read": true, 00:13:01.973 "write": true, 00:13:01.973 "unmap": false, 00:13:01.973 "flush": false, 00:13:01.973 "reset": true, 00:13:01.973 "nvme_admin": false, 00:13:01.973 "nvme_io": false, 00:13:01.973 "nvme_io_md": false, 00:13:01.973 "write_zeroes": true, 00:13:01.973 "zcopy": false, 00:13:01.973 "get_zone_info": false, 
00:13:01.973 "zone_management": false, 00:13:01.973 "zone_append": false, 00:13:01.973 "compare": false, 00:13:01.973 "compare_and_write": false, 00:13:01.973 "abort": false, 00:13:01.973 "seek_hole": false, 00:13:01.973 "seek_data": false, 00:13:01.973 "copy": false, 00:13:01.973 "nvme_iov_md": false 00:13:01.973 }, 00:13:01.973 "memory_domains": [ 00:13:01.973 { 00:13:01.973 "dma_device_id": "system", 00:13:01.973 "dma_device_type": 1 00:13:01.973 }, 00:13:01.973 { 00:13:01.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.973 "dma_device_type": 2 00:13:01.973 }, 00:13:01.973 { 00:13:01.973 "dma_device_id": "system", 00:13:01.973 "dma_device_type": 1 00:13:01.973 }, 00:13:01.973 { 00:13:01.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.973 "dma_device_type": 2 00:13:01.973 }, 00:13:01.973 { 00:13:01.973 "dma_device_id": "system", 00:13:01.973 "dma_device_type": 1 00:13:01.973 }, 00:13:01.973 { 00:13:01.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.973 "dma_device_type": 2 00:13:01.973 } 00:13:01.973 ], 00:13:01.973 "driver_specific": { 00:13:01.973 "raid": { 00:13:01.973 "uuid": "20272311-52cf-46eb-9ac5-65289f7b42b3", 00:13:01.973 "strip_size_kb": 0, 00:13:01.973 "state": "online", 00:13:01.973 "raid_level": "raid1", 00:13:01.973 "superblock": true, 00:13:01.973 "num_base_bdevs": 3, 00:13:01.973 "num_base_bdevs_discovered": 3, 00:13:01.973 "num_base_bdevs_operational": 3, 00:13:01.973 "base_bdevs_list": [ 00:13:01.973 { 00:13:01.973 "name": "pt1", 00:13:01.973 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:01.973 "is_configured": true, 00:13:01.973 "data_offset": 2048, 00:13:01.973 "data_size": 63488 00:13:01.973 }, 00:13:01.973 { 00:13:01.973 "name": "pt2", 00:13:01.973 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:01.973 "is_configured": true, 00:13:01.973 "data_offset": 2048, 00:13:01.973 "data_size": 63488 00:13:01.973 }, 00:13:01.973 { 00:13:01.973 "name": "pt3", 00:13:01.973 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:13:01.973 "is_configured": true, 00:13:01.973 "data_offset": 2048, 00:13:01.973 "data_size": 63488 00:13:01.973 } 00:13:01.973 ] 00:13:01.973 } 00:13:01.973 } 00:13:01.973 }' 00:13:01.973 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:01.973 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:01.973 pt2 00:13:01.973 pt3' 00:13:01.973 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.973 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:01.973 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:01.973 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:01.973 15:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.973 15:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.973 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.973 15:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.973 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:01.973 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:01.973 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:01.973 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.973 15:39:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:01.974 15:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.974 15:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.974 15:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.974 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:01.974 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:01.974 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:01.974 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:01.974 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.974 15:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.974 15:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.974 15:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.974 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:01.974 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:01.974 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:01.974 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:01.974 15:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.974 15:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.974 [2024-12-06 15:39:45.220919] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:01.974 15:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.974 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 20272311-52cf-46eb-9ac5-65289f7b42b3 '!=' 20272311-52cf-46eb-9ac5-65289f7b42b3 ']' 00:13:01.974 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:13:01.974 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:01.974 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:01.974 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:01.974 15:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.974 15:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.974 [2024-12-06 15:39:45.260695] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:01.974 15:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.974 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:02.233 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:02.233 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:02.233 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:02.233 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:02.233 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:02.233 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.233 15:39:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.233 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.233 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.233 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.233 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.233 15:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.233 15:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.233 15:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.233 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.233 "name": "raid_bdev1", 00:13:02.233 "uuid": "20272311-52cf-46eb-9ac5-65289f7b42b3", 00:13:02.233 "strip_size_kb": 0, 00:13:02.233 "state": "online", 00:13:02.233 "raid_level": "raid1", 00:13:02.233 "superblock": true, 00:13:02.233 "num_base_bdevs": 3, 00:13:02.233 "num_base_bdevs_discovered": 2, 00:13:02.233 "num_base_bdevs_operational": 2, 00:13:02.233 "base_bdevs_list": [ 00:13:02.233 { 00:13:02.233 "name": null, 00:13:02.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.233 "is_configured": false, 00:13:02.233 "data_offset": 0, 00:13:02.233 "data_size": 63488 00:13:02.233 }, 00:13:02.233 { 00:13:02.233 "name": "pt2", 00:13:02.233 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:02.233 "is_configured": true, 00:13:02.233 "data_offset": 2048, 00:13:02.233 "data_size": 63488 00:13:02.233 }, 00:13:02.233 { 00:13:02.233 "name": "pt3", 00:13:02.233 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:02.233 "is_configured": true, 00:13:02.233 "data_offset": 2048, 00:13:02.233 "data_size": 63488 00:13:02.233 } 
00:13:02.233 ] 00:13:02.233 }' 00:13:02.233 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.233 15:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.491 [2024-12-06 15:39:45.680681] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:02.491 [2024-12-06 15:39:45.680722] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:02.491 [2024-12-06 15:39:45.680833] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:02.491 [2024-12-06 15:39:45.680909] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:02.491 [2024-12-06 15:39:45.680929] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.491 15:39:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.491 [2024-12-06 15:39:45.764609] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:02.491 [2024-12-06 15:39:45.764787] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.491 [2024-12-06 15:39:45.764817] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:02.491 [2024-12-06 15:39:45.764833] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.491 [2024-12-06 15:39:45.767671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.491 [2024-12-06 15:39:45.767714] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:02.491 [2024-12-06 15:39:45.767803] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:02.491 [2024-12-06 15:39:45.767859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:02.491 pt2 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.491 15:39:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.491 15:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.750 15:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.750 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.750 "name": "raid_bdev1", 00:13:02.750 "uuid": "20272311-52cf-46eb-9ac5-65289f7b42b3", 00:13:02.750 "strip_size_kb": 0, 00:13:02.750 "state": "configuring", 00:13:02.750 "raid_level": "raid1", 00:13:02.750 "superblock": true, 00:13:02.750 "num_base_bdevs": 3, 00:13:02.750 "num_base_bdevs_discovered": 1, 00:13:02.750 "num_base_bdevs_operational": 2, 00:13:02.750 "base_bdevs_list": [ 00:13:02.750 { 00:13:02.750 "name": null, 00:13:02.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.750 "is_configured": false, 00:13:02.750 "data_offset": 2048, 00:13:02.750 "data_size": 63488 00:13:02.750 }, 00:13:02.751 { 00:13:02.751 "name": "pt2", 00:13:02.751 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:02.751 "is_configured": true, 00:13:02.751 "data_offset": 2048, 00:13:02.751 "data_size": 63488 00:13:02.751 }, 00:13:02.751 { 00:13:02.751 "name": null, 00:13:02.751 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:02.751 "is_configured": false, 00:13:02.751 "data_offset": 2048, 00:13:02.751 "data_size": 63488 00:13:02.751 } 
00:13:02.751 ] 00:13:02.751 }' 00:13:02.751 15:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.751 15:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.010 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:03.010 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:03.010 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:13:03.010 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:03.010 15:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.010 15:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.010 [2024-12-06 15:39:46.176127] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:03.010 [2024-12-06 15:39:46.176319] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.010 [2024-12-06 15:39:46.176380] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:03.010 [2024-12-06 15:39:46.176399] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.010 [2024-12-06 15:39:46.176969] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.010 [2024-12-06 15:39:46.177004] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:03.010 [2024-12-06 15:39:46.177109] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:03.010 [2024-12-06 15:39:46.177143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:03.010 [2024-12-06 15:39:46.177296] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:13:03.010 [2024-12-06 15:39:46.177315] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:03.010 [2024-12-06 15:39:46.177637] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:03.010 [2024-12-06 15:39:46.177819] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:03.010 [2024-12-06 15:39:46.177831] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:03.010 [2024-12-06 15:39:46.177990] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:03.010 pt3 00:13:03.010 15:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.010 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:03.010 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:03.010 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:03.010 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:03.010 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:03.010 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:03.010 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.010 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.010 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.010 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.010 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:13:03.010 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.010 15:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.010 15:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.010 15:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.010 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.010 "name": "raid_bdev1", 00:13:03.010 "uuid": "20272311-52cf-46eb-9ac5-65289f7b42b3", 00:13:03.010 "strip_size_kb": 0, 00:13:03.010 "state": "online", 00:13:03.010 "raid_level": "raid1", 00:13:03.010 "superblock": true, 00:13:03.010 "num_base_bdevs": 3, 00:13:03.010 "num_base_bdevs_discovered": 2, 00:13:03.010 "num_base_bdevs_operational": 2, 00:13:03.010 "base_bdevs_list": [ 00:13:03.010 { 00:13:03.010 "name": null, 00:13:03.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.010 "is_configured": false, 00:13:03.010 "data_offset": 2048, 00:13:03.010 "data_size": 63488 00:13:03.010 }, 00:13:03.010 { 00:13:03.010 "name": "pt2", 00:13:03.010 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:03.010 "is_configured": true, 00:13:03.010 "data_offset": 2048, 00:13:03.010 "data_size": 63488 00:13:03.010 }, 00:13:03.010 { 00:13:03.010 "name": "pt3", 00:13:03.010 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:03.010 "is_configured": true, 00:13:03.010 "data_offset": 2048, 00:13:03.010 "data_size": 63488 00:13:03.010 } 00:13:03.010 ] 00:13:03.010 }' 00:13:03.010 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.010 15:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.577 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:03.577 15:39:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.577 15:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.577 [2024-12-06 15:39:46.591667] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:03.577 [2024-12-06 15:39:46.591715] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:03.577 [2024-12-06 15:39:46.591827] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:03.577 [2024-12-06 15:39:46.591912] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:03.577 [2024-12-06 15:39:46.591926] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:03.577 15:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.577 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.577 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:03.577 15:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.577 15:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.577 15:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.577 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:03.577 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:03.577 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:13:03.577 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:13:03.577 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:13:03.577 15:39:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.577 15:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.577 15:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.577 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:03.577 15:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.577 15:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.577 [2024-12-06 15:39:46.663693] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:03.577 [2024-12-06 15:39:46.663782] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.577 [2024-12-06 15:39:46.663813] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:03.577 [2024-12-06 15:39:46.663825] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.577 [2024-12-06 15:39:46.666820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.577 [2024-12-06 15:39:46.666863] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:03.577 [2024-12-06 15:39:46.666989] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:03.577 [2024-12-06 15:39:46.667052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:03.577 [2024-12-06 15:39:46.667221] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:03.577 [2024-12-06 15:39:46.667234] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:03.577 [2024-12-06 15:39:46.667254] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:13:03.577 [2024-12-06 15:39:46.667329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:03.577 pt1 00:13:03.577 15:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.577 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:13:03.577 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:13:03.577 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:03.577 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:03.577 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:03.577 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:03.577 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:03.577 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.577 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.577 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.578 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.578 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.578 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.578 15:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.578 15:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.578 15:39:46 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.578 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.578 "name": "raid_bdev1", 00:13:03.578 "uuid": "20272311-52cf-46eb-9ac5-65289f7b42b3", 00:13:03.578 "strip_size_kb": 0, 00:13:03.578 "state": "configuring", 00:13:03.578 "raid_level": "raid1", 00:13:03.578 "superblock": true, 00:13:03.578 "num_base_bdevs": 3, 00:13:03.578 "num_base_bdevs_discovered": 1, 00:13:03.578 "num_base_bdevs_operational": 2, 00:13:03.578 "base_bdevs_list": [ 00:13:03.578 { 00:13:03.578 "name": null, 00:13:03.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.578 "is_configured": false, 00:13:03.578 "data_offset": 2048, 00:13:03.578 "data_size": 63488 00:13:03.578 }, 00:13:03.578 { 00:13:03.578 "name": "pt2", 00:13:03.578 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:03.578 "is_configured": true, 00:13:03.578 "data_offset": 2048, 00:13:03.578 "data_size": 63488 00:13:03.578 }, 00:13:03.578 { 00:13:03.578 "name": null, 00:13:03.578 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:03.578 "is_configured": false, 00:13:03.578 "data_offset": 2048, 00:13:03.578 "data_size": 63488 00:13:03.578 } 00:13:03.578 ] 00:13:03.578 }' 00:13:03.578 15:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.578 15:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.837 15:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:03.837 15:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.837 15:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.837 15:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:03.837 15:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:03.837 15:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:03.837 15:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:03.837 15:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.837 15:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.837 [2024-12-06 15:39:47.119105] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:03.837 [2024-12-06 15:39:47.119373] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.837 [2024-12-06 15:39:47.119418] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:03.837 [2024-12-06 15:39:47.119434] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.837 [2024-12-06 15:39:47.120124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.837 [2024-12-06 15:39:47.120147] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:03.837 [2024-12-06 15:39:47.120251] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:03.837 [2024-12-06 15:39:47.120278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:03.837 [2024-12-06 15:39:47.120425] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:13:03.837 [2024-12-06 15:39:47.120436] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:03.837 [2024-12-06 15:39:47.120759] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:03.837 [2024-12-06 15:39:47.120938] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:13:03.837 [2024-12-06 15:39:47.120956] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:13:03.837 [2024-12-06 15:39:47.121110] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:03.837 pt3 00:13:03.837 15:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.837 15:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:03.837 15:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:03.837 15:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:03.837 15:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:03.837 15:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:03.837 15:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:03.837 15:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.837 15:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.837 15:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.837 15:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.837 15:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.117 15:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.117 15:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.117 15:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.117 15:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:04.117 15:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.117 "name": "raid_bdev1", 00:13:04.117 "uuid": "20272311-52cf-46eb-9ac5-65289f7b42b3", 00:13:04.117 "strip_size_kb": 0, 00:13:04.117 "state": "online", 00:13:04.117 "raid_level": "raid1", 00:13:04.117 "superblock": true, 00:13:04.117 "num_base_bdevs": 3, 00:13:04.117 "num_base_bdevs_discovered": 2, 00:13:04.117 "num_base_bdevs_operational": 2, 00:13:04.117 "base_bdevs_list": [ 00:13:04.117 { 00:13:04.117 "name": null, 00:13:04.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.117 "is_configured": false, 00:13:04.117 "data_offset": 2048, 00:13:04.117 "data_size": 63488 00:13:04.117 }, 00:13:04.117 { 00:13:04.117 "name": "pt2", 00:13:04.117 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:04.117 "is_configured": true, 00:13:04.117 "data_offset": 2048, 00:13:04.118 "data_size": 63488 00:13:04.118 }, 00:13:04.118 { 00:13:04.118 "name": "pt3", 00:13:04.118 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:04.118 "is_configured": true, 00:13:04.118 "data_offset": 2048, 00:13:04.118 "data_size": 63488 00:13:04.118 } 00:13:04.118 ] 00:13:04.118 }' 00:13:04.118 15:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.118 15:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.376 15:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:04.376 15:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:04.376 15:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.376 15:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.376 15:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.376 15:39:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:04.376 15:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:04.376 15:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.376 15:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:04.376 15:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.376 [2024-12-06 15:39:47.594917] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:04.376 15:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.376 15:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 20272311-52cf-46eb-9ac5-65289f7b42b3 '!=' 20272311-52cf-46eb-9ac5-65289f7b42b3 ']' 00:13:04.376 15:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68668 00:13:04.376 15:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68668 ']' 00:13:04.376 15:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68668 00:13:04.376 15:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:04.376 15:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:04.376 15:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68668 00:13:04.376 killing process with pid 68668 00:13:04.376 15:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:04.376 15:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:04.376 15:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68668' 00:13:04.376 15:39:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68668 00:13:04.376 [2024-12-06 15:39:47.663109] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:04.376 15:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68668 00:13:04.376 [2024-12-06 15:39:47.663232] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:04.376 [2024-12-06 15:39:47.663307] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:04.376 [2024-12-06 15:39:47.663324] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:13:04.944 [2024-12-06 15:39:47.999487] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:06.324 ************************************ 00:13:06.324 END TEST raid_superblock_test 00:13:06.324 ************************************ 00:13:06.324 15:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:06.324 00:13:06.324 real 0m7.712s 00:13:06.324 user 0m11.782s 00:13:06.324 sys 0m1.689s 00:13:06.324 15:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:06.324 15:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.324 15:39:49 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:13:06.324 15:39:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:06.324 15:39:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:06.324 15:39:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:06.324 ************************************ 00:13:06.324 START TEST raid_read_error_test 00:13:06.324 ************************************ 00:13:06.324 15:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:13:06.324 15:39:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:06.324 15:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:13:06.324 15:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:06.324 15:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:06.324 15:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:06.324 15:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:06.324 15:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:06.324 15:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:06.324 15:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:06.324 15:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:06.324 15:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:06.324 15:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:06.324 15:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:06.324 15:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:06.324 15:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:06.324 15:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:06.324 15:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:06.324 15:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:06.324 15:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:06.324 15:39:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:06.325 15:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:06.325 15:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:06.325 15:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:06.325 15:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:06.325 15:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.yfJMzIITUm 00:13:06.325 15:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69115 00:13:06.325 15:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:06.325 15:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69115 00:13:06.325 15:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69115 ']' 00:13:06.325 15:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.325 15:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:06.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.325 15:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.325 15:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:06.325 15:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.325 [2024-12-06 15:39:49.463925] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:13:06.325 [2024-12-06 15:39:49.464326] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69115 ] 00:13:06.584 [2024-12-06 15:39:49.652810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.584 [2024-12-06 15:39:49.789756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.842 [2024-12-06 15:39:50.018663] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:06.842 [2024-12-06 15:39:50.018877] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:07.102 15:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:07.102 15:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:07.102 15:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:07.102 15:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:07.102 15:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.102 15:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.102 BaseBdev1_malloc 00:13:07.102 15:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.102 15:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:07.102 15:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.102 15:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.102 true 00:13:07.102 15:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:07.102 15:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:07.102 15:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.102 15:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.102 [2024-12-06 15:39:50.394050] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:07.102 [2024-12-06 15:39:50.394249] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:07.102 [2024-12-06 15:39:50.394283] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:07.102 [2024-12-06 15:39:50.394299] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:07.362 [2024-12-06 15:39:50.397007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:07.362 [2024-12-06 15:39:50.397053] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:07.362 BaseBdev1 00:13:07.362 15:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.362 15:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:07.362 15:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:07.362 15:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.362 15:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.362 BaseBdev2_malloc 00:13:07.362 15:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.362 15:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:07.362 15:39:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.362 15:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.362 true 00:13:07.362 15:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.362 15:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:07.362 15:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.362 15:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.362 [2024-12-06 15:39:50.472027] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:07.362 [2024-12-06 15:39:50.472204] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:07.362 [2024-12-06 15:39:50.472232] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:07.362 [2024-12-06 15:39:50.472248] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:07.362 [2024-12-06 15:39:50.474947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:07.362 [2024-12-06 15:39:50.474992] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:07.362 BaseBdev2 00:13:07.362 15:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.362 15:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:07.362 15:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:07.362 15:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.362 15:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.362 BaseBdev3_malloc 00:13:07.362 15:39:50 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.362 15:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:07.362 15:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.362 15:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.362 true 00:13:07.362 15:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.362 15:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:07.362 15:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.362 15:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.362 [2024-12-06 15:39:50.561130] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:07.362 [2024-12-06 15:39:50.561190] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:07.362 [2024-12-06 15:39:50.561211] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:07.362 [2024-12-06 15:39:50.561226] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:07.362 [2024-12-06 15:39:50.563960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:07.362 [2024-12-06 15:39:50.564003] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:07.362 BaseBdev3 00:13:07.362 15:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.362 15:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:07.362 15:39:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.362 15:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.362 [2024-12-06 15:39:50.573198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:07.362 [2024-12-06 15:39:50.575711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:07.362 [2024-12-06 15:39:50.575787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:07.362 [2024-12-06 15:39:50.576001] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:07.362 [2024-12-06 15:39:50.576015] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:07.362 [2024-12-06 15:39:50.576296] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:13:07.362 [2024-12-06 15:39:50.576470] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:07.362 [2024-12-06 15:39:50.576484] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:07.362 [2024-12-06 15:39:50.576779] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:07.362 15:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.362 15:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:07.362 15:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:07.362 15:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:07.363 15:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:07.363 15:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:07.363 15:39:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:07.363 15:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.363 15:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.363 15:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.363 15:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.363 15:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.363 15:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.363 15:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.363 15:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.363 15:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.363 15:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.363 "name": "raid_bdev1", 00:13:07.363 "uuid": "babe4abb-9397-496d-ada8-c44af770c082", 00:13:07.363 "strip_size_kb": 0, 00:13:07.363 "state": "online", 00:13:07.363 "raid_level": "raid1", 00:13:07.363 "superblock": true, 00:13:07.363 "num_base_bdevs": 3, 00:13:07.363 "num_base_bdevs_discovered": 3, 00:13:07.363 "num_base_bdevs_operational": 3, 00:13:07.363 "base_bdevs_list": [ 00:13:07.363 { 00:13:07.363 "name": "BaseBdev1", 00:13:07.363 "uuid": "8a0f03de-8482-58f5-8cf7-49ee5a3dbb75", 00:13:07.363 "is_configured": true, 00:13:07.363 "data_offset": 2048, 00:13:07.363 "data_size": 63488 00:13:07.363 }, 00:13:07.363 { 00:13:07.363 "name": "BaseBdev2", 00:13:07.363 "uuid": "b77d3a0a-f033-508d-9ac9-3197853ac9f7", 00:13:07.363 "is_configured": true, 00:13:07.363 "data_offset": 2048, 00:13:07.363 "data_size": 63488 
00:13:07.363 }, 00:13:07.363 { 00:13:07.363 "name": "BaseBdev3", 00:13:07.363 "uuid": "69a695bc-53d1-5ae8-b488-ead3c028cbc3", 00:13:07.363 "is_configured": true, 00:13:07.363 "data_offset": 2048, 00:13:07.363 "data_size": 63488 00:13:07.363 } 00:13:07.363 ] 00:13:07.363 }' 00:13:07.363 15:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.363 15:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.931 15:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:07.931 15:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:07.931 [2024-12-06 15:39:51.098283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:08.869 15:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:08.869 15:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.869 15:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.869 15:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.869 15:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:08.869 15:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:08.869 15:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:13:08.869 15:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:13:08.869 15:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:08.869 15:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:08.869 
15:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:08.869 15:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:08.869 15:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:08.869 15:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:08.869 15:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.869 15:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.869 15:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.869 15:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.869 15:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.869 15:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.869 15:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.869 15:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.869 15:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.869 15:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.869 "name": "raid_bdev1", 00:13:08.869 "uuid": "babe4abb-9397-496d-ada8-c44af770c082", 00:13:08.869 "strip_size_kb": 0, 00:13:08.869 "state": "online", 00:13:08.869 "raid_level": "raid1", 00:13:08.869 "superblock": true, 00:13:08.869 "num_base_bdevs": 3, 00:13:08.869 "num_base_bdevs_discovered": 3, 00:13:08.869 "num_base_bdevs_operational": 3, 00:13:08.869 "base_bdevs_list": [ 00:13:08.869 { 00:13:08.869 "name": "BaseBdev1", 00:13:08.869 "uuid": "8a0f03de-8482-58f5-8cf7-49ee5a3dbb75", 
00:13:08.869 "is_configured": true, 00:13:08.869 "data_offset": 2048, 00:13:08.869 "data_size": 63488 00:13:08.869 }, 00:13:08.869 { 00:13:08.869 "name": "BaseBdev2", 00:13:08.869 "uuid": "b77d3a0a-f033-508d-9ac9-3197853ac9f7", 00:13:08.869 "is_configured": true, 00:13:08.869 "data_offset": 2048, 00:13:08.869 "data_size": 63488 00:13:08.869 }, 00:13:08.869 { 00:13:08.869 "name": "BaseBdev3", 00:13:08.869 "uuid": "69a695bc-53d1-5ae8-b488-ead3c028cbc3", 00:13:08.869 "is_configured": true, 00:13:08.869 "data_offset": 2048, 00:13:08.869 "data_size": 63488 00:13:08.869 } 00:13:08.869 ] 00:13:08.869 }' 00:13:08.869 15:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.869 15:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.229 15:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:09.229 15:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.229 15:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.229 [2024-12-06 15:39:52.447455] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:09.229 [2024-12-06 15:39:52.447494] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:09.229 [2024-12-06 15:39:52.450351] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:09.229 [2024-12-06 15:39:52.450413] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.229 [2024-12-06 15:39:52.450541] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:09.229 [2024-12-06 15:39:52.450556] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:09.229 { 00:13:09.229 "results": [ 00:13:09.229 { 00:13:09.229 "job": "raid_bdev1", 
00:13:09.229 "core_mask": "0x1", 00:13:09.229 "workload": "randrw", 00:13:09.229 "percentage": 50, 00:13:09.229 "status": "finished", 00:13:09.229 "queue_depth": 1, 00:13:09.229 "io_size": 131072, 00:13:09.229 "runtime": 1.349063, 00:13:09.229 "iops": 10396.104555532247, 00:13:09.229 "mibps": 1299.513069441531, 00:13:09.229 "io_failed": 0, 00:13:09.229 "io_timeout": 0, 00:13:09.229 "avg_latency_us": 93.50547481906234, 00:13:09.229 "min_latency_us": 25.188755020080322, 00:13:09.229 "max_latency_us": 1559.4409638554216 00:13:09.229 } 00:13:09.229 ], 00:13:09.229 "core_count": 1 00:13:09.229 } 00:13:09.229 15:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.229 15:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69115 00:13:09.229 15:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69115 ']' 00:13:09.229 15:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69115 00:13:09.229 15:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:13:09.229 15:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:09.229 15:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69115 00:13:09.494 15:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:09.494 killing process with pid 69115 00:13:09.494 15:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:09.494 15:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69115' 00:13:09.494 15:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69115 00:13:09.494 [2024-12-06 15:39:52.500714] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:09.494 15:39:52 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69115 00:13:09.494 [2024-12-06 15:39:52.759042] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:10.872 15:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.yfJMzIITUm 00:13:10.872 15:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:10.872 15:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:10.872 15:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:10.872 15:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:10.872 15:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:10.872 15:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:10.872 15:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:10.872 00:13:10.872 real 0m4.757s 00:13:10.872 user 0m5.452s 00:13:10.872 sys 0m0.752s 00:13:10.872 15:39:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:10.872 15:39:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.872 ************************************ 00:13:10.872 END TEST raid_read_error_test 00:13:10.872 ************************************ 00:13:10.872 15:39:54 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:13:10.872 15:39:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:10.872 15:39:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:10.872 15:39:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:11.131 ************************************ 00:13:11.131 START TEST raid_write_error_test 00:13:11.131 ************************************ 00:13:11.131 15:39:54 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:13:11.131 15:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:11.131 15:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:13:11.131 15:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:11.131 15:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:11.131 15:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:11.131 15:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:11.131 15:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:11.131 15:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:11.131 15:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:11.131 15:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:11.131 15:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:11.131 15:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:11.131 15:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:11.131 15:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:11.131 15:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:11.131 15:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:11.131 15:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:11.131 15:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:13:11.131 15:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:11.131 15:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:11.131 15:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:11.131 15:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:11.131 15:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:11.132 15:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:11.132 15:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.U2ghKxQz70 00:13:11.132 15:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69261 00:13:11.132 15:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69261 00:13:11.132 15:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:11.132 15:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69261 ']' 00:13:11.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.132 15:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.132 15:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:11.132 15:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:11.132 15:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:11.132 15:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.132 [2024-12-06 15:39:54.294874] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:13:11.132 [2024-12-06 15:39:54.295019] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69261 ] 00:13:11.391 [2024-12-06 15:39:54.480976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.391 [2024-12-06 15:39:54.613924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.650 [2024-12-06 15:39:54.867604] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:11.650 [2024-12-06 15:39:54.867963] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:11.909 15:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:11.909 15:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:11.909 15:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:11.909 15:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:11.909 15:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.909 15:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.909 BaseBdev1_malloc 00:13:11.909 15:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.909 15:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:13:11.909 15:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.909 15:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.909 true 00:13:11.909 15:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.909 15:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:11.909 15:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.909 15:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.909 [2024-12-06 15:39:55.196742] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:11.909 [2024-12-06 15:39:55.196961] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.909 [2024-12-06 15:39:55.197001] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:11.909 [2024-12-06 15:39:55.197018] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.909 [2024-12-06 15:39:55.199970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.909 [2024-12-06 15:39:55.200025] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:12.168 BaseBdev1 00:13:12.168 15:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.168 15:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:12.168 15:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:12.168 15:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.168 15:39:55 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:12.168 BaseBdev2_malloc 00:13:12.168 15:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.168 15:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:12.168 15:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.168 15:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.168 true 00:13:12.168 15:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.168 15:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:12.168 15:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.168 15:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.168 [2024-12-06 15:39:55.273991] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:12.168 [2024-12-06 15:39:55.274076] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:12.168 [2024-12-06 15:39:55.274101] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:12.168 [2024-12-06 15:39:55.274117] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.168 [2024-12-06 15:39:55.277048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.168 [2024-12-06 15:39:55.277226] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:12.168 BaseBdev2 00:13:12.168 15:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.168 15:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:12.168 15:39:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:12.168 15:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.168 15:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.168 BaseBdev3_malloc 00:13:12.168 15:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.168 15:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:12.168 15:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.168 15:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.168 true 00:13:12.168 15:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.168 15:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:12.168 15:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.168 15:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.168 [2024-12-06 15:39:55.361010] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:12.168 [2024-12-06 15:39:55.361209] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:12.168 [2024-12-06 15:39:55.361247] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:12.168 [2024-12-06 15:39:55.361264] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.168 [2024-12-06 15:39:55.364157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.168 [2024-12-06 15:39:55.364204] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:13:12.168 BaseBdev3 00:13:12.168 15:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.168 15:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:12.168 15:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.168 15:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.168 [2024-12-06 15:39:55.373146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:12.168 [2024-12-06 15:39:55.375953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:12.168 [2024-12-06 15:39:55.376209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:12.168 [2024-12-06 15:39:55.376482] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:12.168 [2024-12-06 15:39:55.376498] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:12.168 [2024-12-06 15:39:55.376883] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:13:12.168 [2024-12-06 15:39:55.377084] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:12.169 [2024-12-06 15:39:55.377098] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:12.169 [2024-12-06 15:39:55.377378] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:12.169 15:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.169 15:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:12.169 15:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:13:12.169 15:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.169 15:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.169 15:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.169 15:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:12.169 15:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.169 15:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.169 15:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.169 15:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.169 15:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.169 15:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.169 15:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.169 15:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.169 15:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.169 15:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.169 "name": "raid_bdev1", 00:13:12.169 "uuid": "3c0d2a08-d977-49a0-a5ca-1936a3d36d3c", 00:13:12.169 "strip_size_kb": 0, 00:13:12.169 "state": "online", 00:13:12.169 "raid_level": "raid1", 00:13:12.169 "superblock": true, 00:13:12.169 "num_base_bdevs": 3, 00:13:12.169 "num_base_bdevs_discovered": 3, 00:13:12.169 "num_base_bdevs_operational": 3, 00:13:12.169 "base_bdevs_list": [ 00:13:12.169 { 00:13:12.169 "name": "BaseBdev1", 00:13:12.169 
"uuid": "e2117732-b08b-5360-a246-62b18a4a7927", 00:13:12.169 "is_configured": true, 00:13:12.169 "data_offset": 2048, 00:13:12.169 "data_size": 63488 00:13:12.169 }, 00:13:12.169 { 00:13:12.169 "name": "BaseBdev2", 00:13:12.169 "uuid": "97f490de-441c-54cf-8bfc-09fa7c6ada02", 00:13:12.169 "is_configured": true, 00:13:12.169 "data_offset": 2048, 00:13:12.169 "data_size": 63488 00:13:12.169 }, 00:13:12.169 { 00:13:12.169 "name": "BaseBdev3", 00:13:12.169 "uuid": "84d9e006-0b98-5f18-ab94-b48df177b4fb", 00:13:12.169 "is_configured": true, 00:13:12.169 "data_offset": 2048, 00:13:12.169 "data_size": 63488 00:13:12.169 } 00:13:12.169 ] 00:13:12.169 }' 00:13:12.169 15:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.169 15:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.735 15:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:12.735 15:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:12.735 [2024-12-06 15:39:55.882310] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:13.670 15:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:13.670 15:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.670 15:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.670 [2024-12-06 15:39:56.802059] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:13:13.670 [2024-12-06 15:39:56.802262] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:13.670 [2024-12-06 15:39:56.802570] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:13:13.670 15:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.670 15:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:13.670 15:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:13.670 15:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:13:13.670 15:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:13:13.670 15:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:13.670 15:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:13.670 15:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:13.670 15:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:13.670 15:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:13.670 15:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:13.670 15:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.670 15:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.670 15:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.670 15:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.670 15:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.670 15:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.670 15:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:13.670 15:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.670 15:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.670 15:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.670 "name": "raid_bdev1", 00:13:13.670 "uuid": "3c0d2a08-d977-49a0-a5ca-1936a3d36d3c", 00:13:13.670 "strip_size_kb": 0, 00:13:13.670 "state": "online", 00:13:13.670 "raid_level": "raid1", 00:13:13.670 "superblock": true, 00:13:13.670 "num_base_bdevs": 3, 00:13:13.670 "num_base_bdevs_discovered": 2, 00:13:13.670 "num_base_bdevs_operational": 2, 00:13:13.670 "base_bdevs_list": [ 00:13:13.670 { 00:13:13.670 "name": null, 00:13:13.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.670 "is_configured": false, 00:13:13.670 "data_offset": 0, 00:13:13.670 "data_size": 63488 00:13:13.670 }, 00:13:13.670 { 00:13:13.670 "name": "BaseBdev2", 00:13:13.670 "uuid": "97f490de-441c-54cf-8bfc-09fa7c6ada02", 00:13:13.670 "is_configured": true, 00:13:13.670 "data_offset": 2048, 00:13:13.670 "data_size": 63488 00:13:13.670 }, 00:13:13.670 { 00:13:13.670 "name": "BaseBdev3", 00:13:13.670 "uuid": "84d9e006-0b98-5f18-ab94-b48df177b4fb", 00:13:13.670 "is_configured": true, 00:13:13.670 "data_offset": 2048, 00:13:13.670 "data_size": 63488 00:13:13.670 } 00:13:13.670 ] 00:13:13.670 }' 00:13:13.670 15:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.670 15:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.236 15:39:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:14.236 15:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.236 15:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.236 [2024-12-06 15:39:57.265976] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:14.236 [2024-12-06 15:39:57.266242] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:14.236 [2024-12-06 15:39:57.269367] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:14.236 [2024-12-06 15:39:57.269566] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:14.236 [2024-12-06 15:39:57.269703] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:14.236 [2024-12-06 15:39:57.269880] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:14.236 { 00:13:14.236 "results": [ 00:13:14.236 { 00:13:14.236 "job": "raid_bdev1", 00:13:14.236 "core_mask": "0x1", 00:13:14.236 "workload": "randrw", 00:13:14.236 "percentage": 50, 00:13:14.236 "status": "finished", 00:13:14.236 "queue_depth": 1, 00:13:14.236 "io_size": 131072, 00:13:14.236 "runtime": 1.383687, 00:13:14.236 "iops": 11734.590264994902, 00:13:14.236 "mibps": 1466.8237831243628, 00:13:14.236 "io_failed": 0, 00:13:14.236 "io_timeout": 0, 00:13:14.236 "avg_latency_us": 82.50648949187152, 00:13:14.236 "min_latency_us": 24.983132530120482, 00:13:14.236 "max_latency_us": 1408.1028112449799 00:13:14.236 } 00:13:14.236 ], 00:13:14.236 "core_count": 1 00:13:14.236 } 00:13:14.236 15:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.236 15:39:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69261 00:13:14.236 15:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69261 ']' 00:13:14.236 15:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69261 00:13:14.236 15:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:13:14.236 15:39:57 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:14.236 15:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69261 00:13:14.236 15:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:14.236 killing process with pid 69261 00:13:14.236 15:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:14.236 15:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69261' 00:13:14.236 15:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69261 00:13:14.236 [2024-12-06 15:39:57.314429] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:14.236 15:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69261 00:13:14.534 [2024-12-06 15:39:57.574324] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:15.913 15:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.U2ghKxQz70 00:13:15.913 15:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:15.913 15:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:15.913 15:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:15.913 15:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:15.913 15:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:15.913 15:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:15.913 15:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:15.913 00:13:15.913 real 0m4.738s 00:13:15.913 user 0m5.401s 00:13:15.913 sys 0m0.755s 00:13:15.913 15:39:58 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:15.913 15:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.913 ************************************ 00:13:15.913 END TEST raid_write_error_test 00:13:15.913 ************************************ 00:13:15.913 15:39:58 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:13:15.913 15:39:58 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:15.913 15:39:58 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:13:15.913 15:39:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:15.913 15:39:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:15.913 15:39:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:15.913 ************************************ 00:13:15.913 START TEST raid_state_function_test 00:13:15.913 ************************************ 00:13:15.913 15:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:13:15.913 15:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:13:15.913 15:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:15.913 15:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:15.913 15:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:15.913 15:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:15.913 15:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:15.913 15:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:15.913 15:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:13:15.913 15:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:15.913 15:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:15.913 15:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:15.913 15:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:15.913 15:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:15.913 15:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:15.913 15:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:15.913 15:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:15.913 15:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:15.913 15:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:15.913 15:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:15.913 15:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:15.913 15:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:15.913 15:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:15.913 15:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:15.913 15:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:15.913 15:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:13:15.913 15:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:15.913 
15:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:15.913 15:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:15.913 15:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:15.913 Process raid pid: 69404 00:13:15.913 15:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69404 00:13:15.913 15:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:15.913 15:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69404' 00:13:15.913 15:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69404 00:13:15.913 15:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69404 ']' 00:13:15.913 15:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.913 15:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:15.913 15:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.913 15:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:15.913 15:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.913 [2024-12-06 15:39:59.109715] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:13:15.913 [2024-12-06 15:39:59.109881] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:16.173 [2024-12-06 15:39:59.297989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.173 [2024-12-06 15:39:59.447412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.433 [2024-12-06 15:39:59.688973] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:16.433 [2024-12-06 15:39:59.689029] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:17.002 15:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:17.002 15:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:17.002 15:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:17.002 15:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.002 15:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.002 [2024-12-06 15:40:00.053678] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:17.002 [2024-12-06 15:40:00.053764] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:17.002 [2024-12-06 15:40:00.053779] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:17.002 [2024-12-06 15:40:00.053793] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:17.002 [2024-12-06 15:40:00.053801] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:17.002 [2024-12-06 15:40:00.053813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:17.002 [2024-12-06 15:40:00.053821] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:17.002 [2024-12-06 15:40:00.053833] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:17.002 15:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.002 15:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:17.002 15:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.002 15:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:17.002 15:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:17.002 15:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:17.002 15:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:17.002 15:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.002 15:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.002 15:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.002 15:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.002 15:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.002 15:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.002 15:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:17.002 15:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.002 15:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.002 15:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.002 "name": "Existed_Raid", 00:13:17.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.002 "strip_size_kb": 64, 00:13:17.002 "state": "configuring", 00:13:17.002 "raid_level": "raid0", 00:13:17.002 "superblock": false, 00:13:17.002 "num_base_bdevs": 4, 00:13:17.002 "num_base_bdevs_discovered": 0, 00:13:17.002 "num_base_bdevs_operational": 4, 00:13:17.002 "base_bdevs_list": [ 00:13:17.002 { 00:13:17.002 "name": "BaseBdev1", 00:13:17.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.002 "is_configured": false, 00:13:17.002 "data_offset": 0, 00:13:17.002 "data_size": 0 00:13:17.002 }, 00:13:17.002 { 00:13:17.002 "name": "BaseBdev2", 00:13:17.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.002 "is_configured": false, 00:13:17.002 "data_offset": 0, 00:13:17.002 "data_size": 0 00:13:17.002 }, 00:13:17.002 { 00:13:17.002 "name": "BaseBdev3", 00:13:17.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.002 "is_configured": false, 00:13:17.002 "data_offset": 0, 00:13:17.002 "data_size": 0 00:13:17.002 }, 00:13:17.002 { 00:13:17.002 "name": "BaseBdev4", 00:13:17.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.002 "is_configured": false, 00:13:17.002 "data_offset": 0, 00:13:17.002 "data_size": 0 00:13:17.002 } 00:13:17.002 ] 00:13:17.002 }' 00:13:17.002 15:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.002 15:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.262 15:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:13:17.262 15:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.262 15:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.262 [2024-12-06 15:40:00.488983] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:17.262 [2024-12-06 15:40:00.489037] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:17.262 15:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.262 15:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:17.262 15:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.262 15:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.262 [2024-12-06 15:40:00.500930] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:17.262 [2024-12-06 15:40:00.501116] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:17.262 [2024-12-06 15:40:00.501212] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:17.262 [2024-12-06 15:40:00.501258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:17.262 [2024-12-06 15:40:00.501288] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:17.262 [2024-12-06 15:40:00.501304] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:17.262 [2024-12-06 15:40:00.501312] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:17.262 [2024-12-06 15:40:00.501325] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:17.262 15:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.262 15:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:17.262 15:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.262 15:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.262 [2024-12-06 15:40:00.553768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:17.262 BaseBdev1 00:13:17.522 15:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.522 15:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:17.522 15:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:17.522 15:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:17.522 15:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:17.522 15:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:17.522 15:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:17.522 15:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:17.522 15:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.522 15:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.522 15:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.522 15:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:17.522 15:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.522 15:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.522 [ 00:13:17.522 { 00:13:17.522 "name": "BaseBdev1", 00:13:17.522 "aliases": [ 00:13:17.522 "65b3d986-b17b-40c5-a314-da4dbd7f67ff" 00:13:17.522 ], 00:13:17.522 "product_name": "Malloc disk", 00:13:17.522 "block_size": 512, 00:13:17.522 "num_blocks": 65536, 00:13:17.522 "uuid": "65b3d986-b17b-40c5-a314-da4dbd7f67ff", 00:13:17.522 "assigned_rate_limits": { 00:13:17.522 "rw_ios_per_sec": 0, 00:13:17.522 "rw_mbytes_per_sec": 0, 00:13:17.522 "r_mbytes_per_sec": 0, 00:13:17.522 "w_mbytes_per_sec": 0 00:13:17.522 }, 00:13:17.522 "claimed": true, 00:13:17.522 "claim_type": "exclusive_write", 00:13:17.522 "zoned": false, 00:13:17.522 "supported_io_types": { 00:13:17.522 "read": true, 00:13:17.522 "write": true, 00:13:17.522 "unmap": true, 00:13:17.522 "flush": true, 00:13:17.522 "reset": true, 00:13:17.522 "nvme_admin": false, 00:13:17.522 "nvme_io": false, 00:13:17.522 "nvme_io_md": false, 00:13:17.523 "write_zeroes": true, 00:13:17.523 "zcopy": true, 00:13:17.523 "get_zone_info": false, 00:13:17.523 "zone_management": false, 00:13:17.523 "zone_append": false, 00:13:17.523 "compare": false, 00:13:17.523 "compare_and_write": false, 00:13:17.523 "abort": true, 00:13:17.523 "seek_hole": false, 00:13:17.523 "seek_data": false, 00:13:17.523 "copy": true, 00:13:17.523 "nvme_iov_md": false 00:13:17.523 }, 00:13:17.523 "memory_domains": [ 00:13:17.523 { 00:13:17.523 "dma_device_id": "system", 00:13:17.523 "dma_device_type": 1 00:13:17.523 }, 00:13:17.523 { 00:13:17.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.523 "dma_device_type": 2 00:13:17.523 } 00:13:17.523 ], 00:13:17.523 "driver_specific": {} 00:13:17.523 } 00:13:17.523 ] 00:13:17.523 15:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:17.523 15:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:17.523 15:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:17.523 15:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.523 15:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:17.523 15:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:17.523 15:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:17.523 15:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:17.523 15:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.523 15:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.523 15:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.523 15:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.523 15:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.523 15:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.523 15:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.523 15:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.523 15:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.523 15:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.523 "name": "Existed_Raid", 
00:13:17.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.523 "strip_size_kb": 64, 00:13:17.523 "state": "configuring", 00:13:17.523 "raid_level": "raid0", 00:13:17.523 "superblock": false, 00:13:17.523 "num_base_bdevs": 4, 00:13:17.523 "num_base_bdevs_discovered": 1, 00:13:17.523 "num_base_bdevs_operational": 4, 00:13:17.523 "base_bdevs_list": [ 00:13:17.523 { 00:13:17.523 "name": "BaseBdev1", 00:13:17.523 "uuid": "65b3d986-b17b-40c5-a314-da4dbd7f67ff", 00:13:17.523 "is_configured": true, 00:13:17.523 "data_offset": 0, 00:13:17.523 "data_size": 65536 00:13:17.523 }, 00:13:17.523 { 00:13:17.523 "name": "BaseBdev2", 00:13:17.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.523 "is_configured": false, 00:13:17.523 "data_offset": 0, 00:13:17.523 "data_size": 0 00:13:17.523 }, 00:13:17.523 { 00:13:17.523 "name": "BaseBdev3", 00:13:17.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.523 "is_configured": false, 00:13:17.523 "data_offset": 0, 00:13:17.523 "data_size": 0 00:13:17.523 }, 00:13:17.523 { 00:13:17.523 "name": "BaseBdev4", 00:13:17.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.523 "is_configured": false, 00:13:17.523 "data_offset": 0, 00:13:17.523 "data_size": 0 00:13:17.523 } 00:13:17.523 ] 00:13:17.523 }' 00:13:17.523 15:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.523 15:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.783 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:17.783 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.783 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.783 [2024-12-06 15:40:01.009262] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:17.783 [2024-12-06 15:40:01.009335] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:17.783 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.783 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:17.783 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.783 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.783 [2024-12-06 15:40:01.021296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:17.783 [2024-12-06 15:40:01.023839] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:17.783 [2024-12-06 15:40:01.024000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:17.783 [2024-12-06 15:40:01.024083] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:17.783 [2024-12-06 15:40:01.024131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:17.783 [2024-12-06 15:40:01.024161] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:17.783 [2024-12-06 15:40:01.024194] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:17.783 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.783 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:17.783 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:17.783 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
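From here the script loops `(( i = 1 )); (( i < num_base_bdevs ))`, creating one malloc base bdev per iteration and re-running `verify_raid_bdev_state`; the dumps show `num_base_bdevs_discovered` climbing from 1 toward 4 as each `BaseBdevN` is claimed. As those dumps suggest, the discovered count tracks the number of `"is_configured": true` entries in `base_bdevs_list`. A minimal sketch of that relationship (data abridged from the log; the helper name is hypothetical, not SPDK code):

```python
ZERO_UUID = "00000000-0000-0000-0000-000000000000"

def discovered_count(base_bdevs_list):
    """Count configured base bdevs, mirroring num_base_bdevs_discovered in the dumps."""
    return sum(1 for b in base_bdevs_list if b["is_configured"])

# State after BaseBdev1 is created and claimed, as in the dump above:
base_bdevs = [
    {"name": "BaseBdev1", "uuid": "65b3d986-b17b-40c5-a314-da4dbd7f67ff",
     "is_configured": True, "data_size": 65536},
    {"name": "BaseBdev2", "uuid": ZERO_UUID, "is_configured": False, "data_size": 0},
    {"name": "BaseBdev3", "uuid": ZERO_UUID, "is_configured": False, "data_size": 0},
    {"name": "BaseBdev4", "uuid": ZERO_UUID, "is_configured": False, "data_size": 0},
]
assert discovered_count(base_bdevs) == 1  # matches "num_base_bdevs_discovered": 1

# Claiming BaseBdev2 (as bdev_malloc_create does in the next loop iteration)
# bumps the count, just as the subsequent dump shows:
base_bdevs[1]["is_configured"] = True
assert discovered_count(base_bdevs) == 2
```

Once all four counts match, the test expects the raid bdev to leave `configuring` and reach the `online` state, which the final dump in this section confirms.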
00:13:17.783 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.783 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:17.783 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:17.783 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:17.783 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:17.783 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.783 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.783 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.783 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.783 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.783 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.783 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.783 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.783 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.042 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.042 "name": "Existed_Raid", 00:13:18.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.042 "strip_size_kb": 64, 00:13:18.042 "state": "configuring", 00:13:18.042 "raid_level": "raid0", 00:13:18.042 "superblock": false, 00:13:18.042 "num_base_bdevs": 4, 00:13:18.042 
"num_base_bdevs_discovered": 1, 00:13:18.042 "num_base_bdevs_operational": 4, 00:13:18.042 "base_bdevs_list": [ 00:13:18.042 { 00:13:18.042 "name": "BaseBdev1", 00:13:18.042 "uuid": "65b3d986-b17b-40c5-a314-da4dbd7f67ff", 00:13:18.042 "is_configured": true, 00:13:18.042 "data_offset": 0, 00:13:18.042 "data_size": 65536 00:13:18.042 }, 00:13:18.042 { 00:13:18.042 "name": "BaseBdev2", 00:13:18.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.042 "is_configured": false, 00:13:18.042 "data_offset": 0, 00:13:18.042 "data_size": 0 00:13:18.042 }, 00:13:18.042 { 00:13:18.042 "name": "BaseBdev3", 00:13:18.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.042 "is_configured": false, 00:13:18.042 "data_offset": 0, 00:13:18.042 "data_size": 0 00:13:18.042 }, 00:13:18.042 { 00:13:18.042 "name": "BaseBdev4", 00:13:18.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.042 "is_configured": false, 00:13:18.042 "data_offset": 0, 00:13:18.042 "data_size": 0 00:13:18.042 } 00:13:18.042 ] 00:13:18.042 }' 00:13:18.042 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.042 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.302 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:18.302 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.302 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.302 BaseBdev2 00:13:18.302 [2024-12-06 15:40:01.463800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:18.302 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.302 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:18.302 15:40:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:18.302 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:18.302 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:18.302 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:18.302 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:18.302 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:18.302 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.302 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.302 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.302 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:18.302 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.302 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.302 [ 00:13:18.302 { 00:13:18.302 "name": "BaseBdev2", 00:13:18.302 "aliases": [ 00:13:18.302 "8ad74801-8307-4d7f-b801-e98235d38896" 00:13:18.302 ], 00:13:18.302 "product_name": "Malloc disk", 00:13:18.302 "block_size": 512, 00:13:18.302 "num_blocks": 65536, 00:13:18.302 "uuid": "8ad74801-8307-4d7f-b801-e98235d38896", 00:13:18.302 "assigned_rate_limits": { 00:13:18.302 "rw_ios_per_sec": 0, 00:13:18.302 "rw_mbytes_per_sec": 0, 00:13:18.302 "r_mbytes_per_sec": 0, 00:13:18.302 "w_mbytes_per_sec": 0 00:13:18.302 }, 00:13:18.302 "claimed": true, 00:13:18.302 "claim_type": "exclusive_write", 00:13:18.302 "zoned": false, 00:13:18.302 "supported_io_types": { 
00:13:18.302 "read": true, 00:13:18.302 "write": true, 00:13:18.302 "unmap": true, 00:13:18.302 "flush": true, 00:13:18.302 "reset": true, 00:13:18.302 "nvme_admin": false, 00:13:18.302 "nvme_io": false, 00:13:18.302 "nvme_io_md": false, 00:13:18.302 "write_zeroes": true, 00:13:18.302 "zcopy": true, 00:13:18.302 "get_zone_info": false, 00:13:18.302 "zone_management": false, 00:13:18.302 "zone_append": false, 00:13:18.302 "compare": false, 00:13:18.302 "compare_and_write": false, 00:13:18.302 "abort": true, 00:13:18.302 "seek_hole": false, 00:13:18.302 "seek_data": false, 00:13:18.302 "copy": true, 00:13:18.302 "nvme_iov_md": false 00:13:18.302 }, 00:13:18.302 "memory_domains": [ 00:13:18.302 { 00:13:18.302 "dma_device_id": "system", 00:13:18.302 "dma_device_type": 1 00:13:18.302 }, 00:13:18.302 { 00:13:18.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.302 "dma_device_type": 2 00:13:18.302 } 00:13:18.302 ], 00:13:18.302 "driver_specific": {} 00:13:18.302 } 00:13:18.302 ] 00:13:18.302 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.302 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:18.302 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:18.302 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:18.302 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:18.302 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.302 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:18.302 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:18.302 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:13:18.302 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:18.302 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.302 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.302 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.302 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.302 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.302 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.302 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.302 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.302 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.302 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.302 "name": "Existed_Raid", 00:13:18.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.302 "strip_size_kb": 64, 00:13:18.302 "state": "configuring", 00:13:18.302 "raid_level": "raid0", 00:13:18.302 "superblock": false, 00:13:18.302 "num_base_bdevs": 4, 00:13:18.302 "num_base_bdevs_discovered": 2, 00:13:18.302 "num_base_bdevs_operational": 4, 00:13:18.302 "base_bdevs_list": [ 00:13:18.302 { 00:13:18.302 "name": "BaseBdev1", 00:13:18.302 "uuid": "65b3d986-b17b-40c5-a314-da4dbd7f67ff", 00:13:18.302 "is_configured": true, 00:13:18.302 "data_offset": 0, 00:13:18.302 "data_size": 65536 00:13:18.302 }, 00:13:18.302 { 00:13:18.302 "name": "BaseBdev2", 00:13:18.302 "uuid": "8ad74801-8307-4d7f-b801-e98235d38896", 00:13:18.302 
"is_configured": true, 00:13:18.302 "data_offset": 0, 00:13:18.302 "data_size": 65536 00:13:18.302 }, 00:13:18.302 { 00:13:18.302 "name": "BaseBdev3", 00:13:18.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.302 "is_configured": false, 00:13:18.302 "data_offset": 0, 00:13:18.302 "data_size": 0 00:13:18.302 }, 00:13:18.302 { 00:13:18.302 "name": "BaseBdev4", 00:13:18.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.302 "is_configured": false, 00:13:18.302 "data_offset": 0, 00:13:18.302 "data_size": 0 00:13:18.302 } 00:13:18.302 ] 00:13:18.302 }' 00:13:18.302 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.302 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.871 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:18.871 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.871 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.871 [2024-12-06 15:40:01.938156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:18.871 BaseBdev3 00:13:18.871 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.871 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:18.871 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:18.871 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:18.871 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:18.871 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:18.871 15:40:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:18.871 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:18.871 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.871 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.871 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.871 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:18.871 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.871 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.871 [ 00:13:18.871 { 00:13:18.871 "name": "BaseBdev3", 00:13:18.871 "aliases": [ 00:13:18.871 "d16cb431-587d-4724-a070-6238ee96780c" 00:13:18.871 ], 00:13:18.871 "product_name": "Malloc disk", 00:13:18.871 "block_size": 512, 00:13:18.871 "num_blocks": 65536, 00:13:18.871 "uuid": "d16cb431-587d-4724-a070-6238ee96780c", 00:13:18.871 "assigned_rate_limits": { 00:13:18.871 "rw_ios_per_sec": 0, 00:13:18.871 "rw_mbytes_per_sec": 0, 00:13:18.871 "r_mbytes_per_sec": 0, 00:13:18.871 "w_mbytes_per_sec": 0 00:13:18.871 }, 00:13:18.871 "claimed": true, 00:13:18.871 "claim_type": "exclusive_write", 00:13:18.871 "zoned": false, 00:13:18.871 "supported_io_types": { 00:13:18.871 "read": true, 00:13:18.871 "write": true, 00:13:18.871 "unmap": true, 00:13:18.871 "flush": true, 00:13:18.871 "reset": true, 00:13:18.871 "nvme_admin": false, 00:13:18.871 "nvme_io": false, 00:13:18.871 "nvme_io_md": false, 00:13:18.871 "write_zeroes": true, 00:13:18.871 "zcopy": true, 00:13:18.871 "get_zone_info": false, 00:13:18.871 "zone_management": false, 00:13:18.871 "zone_append": false, 00:13:18.871 "compare": false, 00:13:18.871 "compare_and_write": false, 
00:13:18.871 "abort": true, 00:13:18.871 "seek_hole": false, 00:13:18.871 "seek_data": false, 00:13:18.871 "copy": true, 00:13:18.871 "nvme_iov_md": false 00:13:18.871 }, 00:13:18.871 "memory_domains": [ 00:13:18.871 { 00:13:18.871 "dma_device_id": "system", 00:13:18.871 "dma_device_type": 1 00:13:18.872 }, 00:13:18.872 { 00:13:18.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.872 "dma_device_type": 2 00:13:18.872 } 00:13:18.872 ], 00:13:18.872 "driver_specific": {} 00:13:18.872 } 00:13:18.872 ] 00:13:18.872 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.872 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:18.872 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:18.872 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:18.872 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:18.872 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.872 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:18.872 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:18.872 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:18.872 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:18.872 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.872 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.872 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:18.872 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.872 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.872 15:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.872 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.872 15:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.872 15:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.872 15:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.872 "name": "Existed_Raid", 00:13:18.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.872 "strip_size_kb": 64, 00:13:18.872 "state": "configuring", 00:13:18.872 "raid_level": "raid0", 00:13:18.872 "superblock": false, 00:13:18.872 "num_base_bdevs": 4, 00:13:18.872 "num_base_bdevs_discovered": 3, 00:13:18.872 "num_base_bdevs_operational": 4, 00:13:18.872 "base_bdevs_list": [ 00:13:18.872 { 00:13:18.872 "name": "BaseBdev1", 00:13:18.872 "uuid": "65b3d986-b17b-40c5-a314-da4dbd7f67ff", 00:13:18.872 "is_configured": true, 00:13:18.872 "data_offset": 0, 00:13:18.872 "data_size": 65536 00:13:18.872 }, 00:13:18.872 { 00:13:18.872 "name": "BaseBdev2", 00:13:18.872 "uuid": "8ad74801-8307-4d7f-b801-e98235d38896", 00:13:18.872 "is_configured": true, 00:13:18.872 "data_offset": 0, 00:13:18.872 "data_size": 65536 00:13:18.872 }, 00:13:18.872 { 00:13:18.872 "name": "BaseBdev3", 00:13:18.872 "uuid": "d16cb431-587d-4724-a070-6238ee96780c", 00:13:18.872 "is_configured": true, 00:13:18.872 "data_offset": 0, 00:13:18.872 "data_size": 65536 00:13:18.872 }, 00:13:18.872 { 00:13:18.872 "name": "BaseBdev4", 00:13:18.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.872 "is_configured": false, 
00:13:18.872 "data_offset": 0, 00:13:18.872 "data_size": 0 00:13:18.872 } 00:13:18.872 ] 00:13:18.872 }' 00:13:18.872 15:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.872 15:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.131 15:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:19.131 15:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.131 15:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.392 [2024-12-06 15:40:02.429121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:19.392 [2024-12-06 15:40:02.429382] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:19.392 [2024-12-06 15:40:02.429410] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:13:19.392 [2024-12-06 15:40:02.429805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:19.392 [2024-12-06 15:40:02.430003] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:19.392 [2024-12-06 15:40:02.430018] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:19.392 [2024-12-06 15:40:02.430361] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:19.392 BaseBdev4 00:13:19.392 15:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.392 15:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:19.392 15:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:19.392 15:40:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:19.392 15:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:19.392 15:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:19.392 15:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:19.392 15:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:19.392 15:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.392 15:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.392 15:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.392 15:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:19.392 15:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.392 15:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.392 [ 00:13:19.392 { 00:13:19.392 "name": "BaseBdev4", 00:13:19.392 "aliases": [ 00:13:19.392 "9095ec31-256a-4210-b299-3ebb6c658b54" 00:13:19.392 ], 00:13:19.392 "product_name": "Malloc disk", 00:13:19.392 "block_size": 512, 00:13:19.392 "num_blocks": 65536, 00:13:19.392 "uuid": "9095ec31-256a-4210-b299-3ebb6c658b54", 00:13:19.392 "assigned_rate_limits": { 00:13:19.392 "rw_ios_per_sec": 0, 00:13:19.392 "rw_mbytes_per_sec": 0, 00:13:19.392 "r_mbytes_per_sec": 0, 00:13:19.392 "w_mbytes_per_sec": 0 00:13:19.392 }, 00:13:19.392 "claimed": true, 00:13:19.392 "claim_type": "exclusive_write", 00:13:19.392 "zoned": false, 00:13:19.392 "supported_io_types": { 00:13:19.392 "read": true, 00:13:19.392 "write": true, 00:13:19.392 "unmap": true, 00:13:19.392 "flush": true, 00:13:19.392 "reset": true, 00:13:19.392 
"nvme_admin": false, 00:13:19.392 "nvme_io": false, 00:13:19.392 "nvme_io_md": false, 00:13:19.392 "write_zeroes": true, 00:13:19.392 "zcopy": true, 00:13:19.392 "get_zone_info": false, 00:13:19.392 "zone_management": false, 00:13:19.392 "zone_append": false, 00:13:19.392 "compare": false, 00:13:19.392 "compare_and_write": false, 00:13:19.392 "abort": true, 00:13:19.392 "seek_hole": false, 00:13:19.392 "seek_data": false, 00:13:19.392 "copy": true, 00:13:19.392 "nvme_iov_md": false 00:13:19.392 }, 00:13:19.392 "memory_domains": [ 00:13:19.392 { 00:13:19.392 "dma_device_id": "system", 00:13:19.392 "dma_device_type": 1 00:13:19.392 }, 00:13:19.392 { 00:13:19.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.392 "dma_device_type": 2 00:13:19.392 } 00:13:19.392 ], 00:13:19.392 "driver_specific": {} 00:13:19.392 } 00:13:19.392 ] 00:13:19.392 15:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.392 15:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:19.392 15:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:19.392 15:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:19.392 15:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:19.392 15:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:19.392 15:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.392 15:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:19.392 15:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:19.392 15:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:19.392 15:40:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.392 15:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.392 15:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.392 15:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.392 15:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.392 15:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.392 15:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.392 15:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.392 15:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.392 15:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.392 "name": "Existed_Raid", 00:13:19.392 "uuid": "e5b33ae3-3068-4d22-b9ab-80ff80e44130", 00:13:19.392 "strip_size_kb": 64, 00:13:19.392 "state": "online", 00:13:19.392 "raid_level": "raid0", 00:13:19.392 "superblock": false, 00:13:19.392 "num_base_bdevs": 4, 00:13:19.392 "num_base_bdevs_discovered": 4, 00:13:19.392 "num_base_bdevs_operational": 4, 00:13:19.392 "base_bdevs_list": [ 00:13:19.392 { 00:13:19.392 "name": "BaseBdev1", 00:13:19.393 "uuid": "65b3d986-b17b-40c5-a314-da4dbd7f67ff", 00:13:19.393 "is_configured": true, 00:13:19.393 "data_offset": 0, 00:13:19.393 "data_size": 65536 00:13:19.393 }, 00:13:19.393 { 00:13:19.393 "name": "BaseBdev2", 00:13:19.393 "uuid": "8ad74801-8307-4d7f-b801-e98235d38896", 00:13:19.393 "is_configured": true, 00:13:19.393 "data_offset": 0, 00:13:19.393 "data_size": 65536 00:13:19.393 }, 00:13:19.393 { 00:13:19.393 "name": "BaseBdev3", 00:13:19.393 "uuid": 
"d16cb431-587d-4724-a070-6238ee96780c", 00:13:19.393 "is_configured": true, 00:13:19.393 "data_offset": 0, 00:13:19.393 "data_size": 65536 00:13:19.393 }, 00:13:19.393 { 00:13:19.393 "name": "BaseBdev4", 00:13:19.393 "uuid": "9095ec31-256a-4210-b299-3ebb6c658b54", 00:13:19.393 "is_configured": true, 00:13:19.393 "data_offset": 0, 00:13:19.393 "data_size": 65536 00:13:19.393 } 00:13:19.393 ] 00:13:19.393 }' 00:13:19.393 15:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.393 15:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.653 15:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:19.653 15:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:19.653 15:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:19.653 15:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:19.653 15:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:19.653 15:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:19.653 15:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:19.653 15:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:19.653 15:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.653 15:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.653 [2024-12-06 15:40:02.900923] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:19.653 15:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.653 15:40:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:19.653 "name": "Existed_Raid", 00:13:19.653 "aliases": [ 00:13:19.653 "e5b33ae3-3068-4d22-b9ab-80ff80e44130" 00:13:19.653 ], 00:13:19.653 "product_name": "Raid Volume", 00:13:19.653 "block_size": 512, 00:13:19.653 "num_blocks": 262144, 00:13:19.653 "uuid": "e5b33ae3-3068-4d22-b9ab-80ff80e44130", 00:13:19.653 "assigned_rate_limits": { 00:13:19.653 "rw_ios_per_sec": 0, 00:13:19.653 "rw_mbytes_per_sec": 0, 00:13:19.653 "r_mbytes_per_sec": 0, 00:13:19.653 "w_mbytes_per_sec": 0 00:13:19.653 }, 00:13:19.653 "claimed": false, 00:13:19.653 "zoned": false, 00:13:19.653 "supported_io_types": { 00:13:19.653 "read": true, 00:13:19.653 "write": true, 00:13:19.653 "unmap": true, 00:13:19.653 "flush": true, 00:13:19.653 "reset": true, 00:13:19.653 "nvme_admin": false, 00:13:19.653 "nvme_io": false, 00:13:19.653 "nvme_io_md": false, 00:13:19.653 "write_zeroes": true, 00:13:19.653 "zcopy": false, 00:13:19.653 "get_zone_info": false, 00:13:19.653 "zone_management": false, 00:13:19.653 "zone_append": false, 00:13:19.653 "compare": false, 00:13:19.653 "compare_and_write": false, 00:13:19.653 "abort": false, 00:13:19.653 "seek_hole": false, 00:13:19.653 "seek_data": false, 00:13:19.653 "copy": false, 00:13:19.653 "nvme_iov_md": false 00:13:19.653 }, 00:13:19.653 "memory_domains": [ 00:13:19.653 { 00:13:19.653 "dma_device_id": "system", 00:13:19.653 "dma_device_type": 1 00:13:19.653 }, 00:13:19.653 { 00:13:19.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.653 "dma_device_type": 2 00:13:19.653 }, 00:13:19.653 { 00:13:19.653 "dma_device_id": "system", 00:13:19.653 "dma_device_type": 1 00:13:19.653 }, 00:13:19.653 { 00:13:19.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.653 "dma_device_type": 2 00:13:19.653 }, 00:13:19.653 { 00:13:19.653 "dma_device_id": "system", 00:13:19.653 "dma_device_type": 1 00:13:19.653 }, 00:13:19.653 { 00:13:19.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:13:19.653 "dma_device_type": 2 00:13:19.653 }, 00:13:19.653 { 00:13:19.653 "dma_device_id": "system", 00:13:19.653 "dma_device_type": 1 00:13:19.653 }, 00:13:19.653 { 00:13:19.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.653 "dma_device_type": 2 00:13:19.653 } 00:13:19.653 ], 00:13:19.653 "driver_specific": { 00:13:19.653 "raid": { 00:13:19.653 "uuid": "e5b33ae3-3068-4d22-b9ab-80ff80e44130", 00:13:19.653 "strip_size_kb": 64, 00:13:19.653 "state": "online", 00:13:19.653 "raid_level": "raid0", 00:13:19.653 "superblock": false, 00:13:19.653 "num_base_bdevs": 4, 00:13:19.653 "num_base_bdevs_discovered": 4, 00:13:19.653 "num_base_bdevs_operational": 4, 00:13:19.653 "base_bdevs_list": [ 00:13:19.653 { 00:13:19.653 "name": "BaseBdev1", 00:13:19.653 "uuid": "65b3d986-b17b-40c5-a314-da4dbd7f67ff", 00:13:19.653 "is_configured": true, 00:13:19.653 "data_offset": 0, 00:13:19.653 "data_size": 65536 00:13:19.653 }, 00:13:19.653 { 00:13:19.653 "name": "BaseBdev2", 00:13:19.653 "uuid": "8ad74801-8307-4d7f-b801-e98235d38896", 00:13:19.653 "is_configured": true, 00:13:19.653 "data_offset": 0, 00:13:19.653 "data_size": 65536 00:13:19.653 }, 00:13:19.653 { 00:13:19.653 "name": "BaseBdev3", 00:13:19.653 "uuid": "d16cb431-587d-4724-a070-6238ee96780c", 00:13:19.653 "is_configured": true, 00:13:19.653 "data_offset": 0, 00:13:19.653 "data_size": 65536 00:13:19.653 }, 00:13:19.653 { 00:13:19.653 "name": "BaseBdev4", 00:13:19.653 "uuid": "9095ec31-256a-4210-b299-3ebb6c658b54", 00:13:19.653 "is_configured": true, 00:13:19.653 "data_offset": 0, 00:13:19.653 "data_size": 65536 00:13:19.653 } 00:13:19.653 ] 00:13:19.653 } 00:13:19.653 } 00:13:19.653 }' 00:13:19.653 15:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:19.949 15:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:19.949 BaseBdev2 00:13:19.949 BaseBdev3 
00:13:19.949 BaseBdev4' 00:13:19.949 15:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:19.949 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:19.949 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:19.949 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:19.949 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:19.949 15:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.949 15:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.949 15:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.949 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:19.950 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:19.950 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:19.950 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:19.950 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:19.950 15:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.950 15:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.950 15:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.950 15:40:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:19.950 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:19.950 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:19.950 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:19.950 15:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.950 15:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.950 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:19.950 15:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.950 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:19.950 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:19.950 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:19.950 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:19.950 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:19.950 15:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.950 15:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.950 15:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.950 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:19.950 15:40:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:19.950 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:19.950 15:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.950 15:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.950 [2024-12-06 15:40:03.208196] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:19.950 [2024-12-06 15:40:03.208361] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:19.950 [2024-12-06 15:40:03.208467] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:20.209 15:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.209 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:20.209 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:13:20.209 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:20.209 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:20.209 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:20.209 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:13:20.209 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:20.209 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:20.209 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:20.209 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:13:20.209 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:20.209 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.209 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.209 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.209 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.209 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.210 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.210 15:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.210 15:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.210 15:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.210 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.210 "name": "Existed_Raid", 00:13:20.210 "uuid": "e5b33ae3-3068-4d22-b9ab-80ff80e44130", 00:13:20.210 "strip_size_kb": 64, 00:13:20.210 "state": "offline", 00:13:20.210 "raid_level": "raid0", 00:13:20.210 "superblock": false, 00:13:20.210 "num_base_bdevs": 4, 00:13:20.210 "num_base_bdevs_discovered": 3, 00:13:20.210 "num_base_bdevs_operational": 3, 00:13:20.210 "base_bdevs_list": [ 00:13:20.210 { 00:13:20.210 "name": null, 00:13:20.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.210 "is_configured": false, 00:13:20.210 "data_offset": 0, 00:13:20.210 "data_size": 65536 00:13:20.210 }, 00:13:20.210 { 00:13:20.210 "name": "BaseBdev2", 00:13:20.210 "uuid": "8ad74801-8307-4d7f-b801-e98235d38896", 00:13:20.210 "is_configured": 
true, 00:13:20.210 "data_offset": 0, 00:13:20.210 "data_size": 65536 00:13:20.210 }, 00:13:20.210 { 00:13:20.210 "name": "BaseBdev3", 00:13:20.210 "uuid": "d16cb431-587d-4724-a070-6238ee96780c", 00:13:20.210 "is_configured": true, 00:13:20.210 "data_offset": 0, 00:13:20.210 "data_size": 65536 00:13:20.210 }, 00:13:20.210 { 00:13:20.210 "name": "BaseBdev4", 00:13:20.210 "uuid": "9095ec31-256a-4210-b299-3ebb6c658b54", 00:13:20.210 "is_configured": true, 00:13:20.210 "data_offset": 0, 00:13:20.210 "data_size": 65536 00:13:20.210 } 00:13:20.210 ] 00:13:20.210 }' 00:13:20.210 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.210 15:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.469 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:20.469 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:20.469 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.469 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:20.469 15:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.469 15:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.728 15:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.728 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:20.728 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:20.729 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:20.729 15:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:20.729 15:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.729 [2024-12-06 15:40:03.798133] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:20.729 15:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.729 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:20.729 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:20.729 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.729 15:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.729 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:20.729 15:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.729 15:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.729 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:20.729 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:20.729 15:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:20.729 15:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.729 15:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.729 [2024-12-06 15:40:03.956456] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:20.989 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.989 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:20.989 15:40:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:20.989 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:20.989 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.989 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.989 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.989 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.989 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:20.989 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:20.989 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:20.989 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.989 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.989 [2024-12-06 15:40:04.112719] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:20.989 [2024-12-06 15:40:04.112898] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:20.989 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.989 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:20.989 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:20.989 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.989 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:13:20.989 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.989 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.989 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.989 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:20.989 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:20.989 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:20.989 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:20.989 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:20.989 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:20.989 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.989 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.249 BaseBdev2 00:13:21.249 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.249 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:21.249 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:21.249 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:21.249 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:21.249 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:21.249 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:13:21.249 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:21.249 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.249 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.249 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.249 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:21.249 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.249 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.249 [ 00:13:21.249 { 00:13:21.249 "name": "BaseBdev2", 00:13:21.249 "aliases": [ 00:13:21.249 "1ce9f913-40a0-42b0-974e-e97401c33ec5" 00:13:21.249 ], 00:13:21.249 "product_name": "Malloc disk", 00:13:21.249 "block_size": 512, 00:13:21.249 "num_blocks": 65536, 00:13:21.249 "uuid": "1ce9f913-40a0-42b0-974e-e97401c33ec5", 00:13:21.249 "assigned_rate_limits": { 00:13:21.249 "rw_ios_per_sec": 0, 00:13:21.249 "rw_mbytes_per_sec": 0, 00:13:21.249 "r_mbytes_per_sec": 0, 00:13:21.249 "w_mbytes_per_sec": 0 00:13:21.249 }, 00:13:21.249 "claimed": false, 00:13:21.249 "zoned": false, 00:13:21.249 "supported_io_types": { 00:13:21.249 "read": true, 00:13:21.249 "write": true, 00:13:21.249 "unmap": true, 00:13:21.249 "flush": true, 00:13:21.249 "reset": true, 00:13:21.249 "nvme_admin": false, 00:13:21.249 "nvme_io": false, 00:13:21.249 "nvme_io_md": false, 00:13:21.249 "write_zeroes": true, 00:13:21.249 "zcopy": true, 00:13:21.249 "get_zone_info": false, 00:13:21.249 "zone_management": false, 00:13:21.249 "zone_append": false, 00:13:21.249 "compare": false, 00:13:21.249 "compare_and_write": false, 00:13:21.249 "abort": true, 00:13:21.249 "seek_hole": false, 00:13:21.249 
"seek_data": false, 00:13:21.249 "copy": true, 00:13:21.249 "nvme_iov_md": false 00:13:21.249 }, 00:13:21.249 "memory_domains": [ 00:13:21.249 { 00:13:21.249 "dma_device_id": "system", 00:13:21.249 "dma_device_type": 1 00:13:21.249 }, 00:13:21.249 { 00:13:21.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.249 "dma_device_type": 2 00:13:21.249 } 00:13:21.249 ], 00:13:21.249 "driver_specific": {} 00:13:21.249 } 00:13:21.249 ] 00:13:21.249 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.249 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:21.249 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:21.249 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:21.249 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:21.249 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.249 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.249 BaseBdev3 00:13:21.249 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.249 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:21.249 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:21.249 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:21.249 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:21.249 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:21.249 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:13:21.249 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:21.249 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.249 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.249 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.249 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:21.249 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.250 [ 00:13:21.250 { 00:13:21.250 "name": "BaseBdev3", 00:13:21.250 "aliases": [ 00:13:21.250 "8041fb5d-6928-44ad-a773-364404ea951f" 00:13:21.250 ], 00:13:21.250 "product_name": "Malloc disk", 00:13:21.250 "block_size": 512, 00:13:21.250 "num_blocks": 65536, 00:13:21.250 "uuid": "8041fb5d-6928-44ad-a773-364404ea951f", 00:13:21.250 "assigned_rate_limits": { 00:13:21.250 "rw_ios_per_sec": 0, 00:13:21.250 "rw_mbytes_per_sec": 0, 00:13:21.250 "r_mbytes_per_sec": 0, 00:13:21.250 "w_mbytes_per_sec": 0 00:13:21.250 }, 00:13:21.250 "claimed": false, 00:13:21.250 "zoned": false, 00:13:21.250 "supported_io_types": { 00:13:21.250 "read": true, 00:13:21.250 "write": true, 00:13:21.250 "unmap": true, 00:13:21.250 "flush": true, 00:13:21.250 "reset": true, 00:13:21.250 "nvme_admin": false, 00:13:21.250 "nvme_io": false, 00:13:21.250 "nvme_io_md": false, 00:13:21.250 "write_zeroes": true, 00:13:21.250 "zcopy": true, 00:13:21.250 "get_zone_info": false, 00:13:21.250 "zone_management": false, 00:13:21.250 "zone_append": false, 00:13:21.250 "compare": false, 00:13:21.250 "compare_and_write": false, 00:13:21.250 "abort": true, 00:13:21.250 "seek_hole": false, 00:13:21.250 "seek_data": false, 
00:13:21.250 "copy": true, 00:13:21.250 "nvme_iov_md": false 00:13:21.250 }, 00:13:21.250 "memory_domains": [ 00:13:21.250 { 00:13:21.250 "dma_device_id": "system", 00:13:21.250 "dma_device_type": 1 00:13:21.250 }, 00:13:21.250 { 00:13:21.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.250 "dma_device_type": 2 00:13:21.250 } 00:13:21.250 ], 00:13:21.250 "driver_specific": {} 00:13:21.250 } 00:13:21.250 ] 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.250 BaseBdev4 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:21.250 
15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.250 [ 00:13:21.250 { 00:13:21.250 "name": "BaseBdev4", 00:13:21.250 "aliases": [ 00:13:21.250 "0a368443-18f6-4bee-9bf2-27101e458f29" 00:13:21.250 ], 00:13:21.250 "product_name": "Malloc disk", 00:13:21.250 "block_size": 512, 00:13:21.250 "num_blocks": 65536, 00:13:21.250 "uuid": "0a368443-18f6-4bee-9bf2-27101e458f29", 00:13:21.250 "assigned_rate_limits": { 00:13:21.250 "rw_ios_per_sec": 0, 00:13:21.250 "rw_mbytes_per_sec": 0, 00:13:21.250 "r_mbytes_per_sec": 0, 00:13:21.250 "w_mbytes_per_sec": 0 00:13:21.250 }, 00:13:21.250 "claimed": false, 00:13:21.250 "zoned": false, 00:13:21.250 "supported_io_types": { 00:13:21.250 "read": true, 00:13:21.250 "write": true, 00:13:21.250 "unmap": true, 00:13:21.250 "flush": true, 00:13:21.250 "reset": true, 00:13:21.250 "nvme_admin": false, 00:13:21.250 "nvme_io": false, 00:13:21.250 "nvme_io_md": false, 00:13:21.250 "write_zeroes": true, 00:13:21.250 "zcopy": true, 00:13:21.250 "get_zone_info": false, 00:13:21.250 "zone_management": false, 00:13:21.250 "zone_append": false, 00:13:21.250 "compare": false, 00:13:21.250 "compare_and_write": false, 00:13:21.250 "abort": true, 00:13:21.250 "seek_hole": false, 00:13:21.250 "seek_data": false, 00:13:21.250 
"copy": true, 00:13:21.250 "nvme_iov_md": false 00:13:21.250 }, 00:13:21.250 "memory_domains": [ 00:13:21.250 { 00:13:21.250 "dma_device_id": "system", 00:13:21.250 "dma_device_type": 1 00:13:21.250 }, 00:13:21.250 { 00:13:21.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.250 "dma_device_type": 2 00:13:21.250 } 00:13:21.250 ], 00:13:21.250 "driver_specific": {} 00:13:21.250 } 00:13:21.250 ] 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.250 [2024-12-06 15:40:04.492008] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:21.250 [2024-12-06 15:40:04.492174] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:21.250 [2024-12-06 15:40:04.492279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:21.250 [2024-12-06 15:40:04.494767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:21.250 [2024-12-06 15:40:04.494935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.250 15:40:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.250 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.509 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.509 "name": "Existed_Raid", 00:13:21.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.509 "strip_size_kb": 64, 00:13:21.509 "state": "configuring", 00:13:21.509 
"raid_level": "raid0", 00:13:21.509 "superblock": false, 00:13:21.509 "num_base_bdevs": 4, 00:13:21.509 "num_base_bdevs_discovered": 3, 00:13:21.509 "num_base_bdevs_operational": 4, 00:13:21.509 "base_bdevs_list": [ 00:13:21.509 { 00:13:21.509 "name": "BaseBdev1", 00:13:21.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.509 "is_configured": false, 00:13:21.509 "data_offset": 0, 00:13:21.509 "data_size": 0 00:13:21.509 }, 00:13:21.509 { 00:13:21.509 "name": "BaseBdev2", 00:13:21.509 "uuid": "1ce9f913-40a0-42b0-974e-e97401c33ec5", 00:13:21.509 "is_configured": true, 00:13:21.509 "data_offset": 0, 00:13:21.509 "data_size": 65536 00:13:21.509 }, 00:13:21.509 { 00:13:21.509 "name": "BaseBdev3", 00:13:21.509 "uuid": "8041fb5d-6928-44ad-a773-364404ea951f", 00:13:21.509 "is_configured": true, 00:13:21.509 "data_offset": 0, 00:13:21.509 "data_size": 65536 00:13:21.509 }, 00:13:21.509 { 00:13:21.509 "name": "BaseBdev4", 00:13:21.509 "uuid": "0a368443-18f6-4bee-9bf2-27101e458f29", 00:13:21.509 "is_configured": true, 00:13:21.509 "data_offset": 0, 00:13:21.510 "data_size": 65536 00:13:21.510 } 00:13:21.510 ] 00:13:21.510 }' 00:13:21.510 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.510 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.769 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:21.769 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.769 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.769 [2024-12-06 15:40:04.895642] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:21.769 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.769 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:21.769 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.769 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:21.769 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:21.769 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:21.769 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:21.769 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.769 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.769 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.769 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.769 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.769 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.769 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.769 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.770 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.770 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.770 "name": "Existed_Raid", 00:13:21.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.770 "strip_size_kb": 64, 00:13:21.770 "state": "configuring", 00:13:21.770 "raid_level": "raid0", 00:13:21.770 "superblock": false, 00:13:21.770 
"num_base_bdevs": 4, 00:13:21.770 "num_base_bdevs_discovered": 2, 00:13:21.770 "num_base_bdevs_operational": 4, 00:13:21.770 "base_bdevs_list": [ 00:13:21.770 { 00:13:21.770 "name": "BaseBdev1", 00:13:21.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.770 "is_configured": false, 00:13:21.770 "data_offset": 0, 00:13:21.770 "data_size": 0 00:13:21.770 }, 00:13:21.770 { 00:13:21.770 "name": null, 00:13:21.770 "uuid": "1ce9f913-40a0-42b0-974e-e97401c33ec5", 00:13:21.770 "is_configured": false, 00:13:21.770 "data_offset": 0, 00:13:21.770 "data_size": 65536 00:13:21.770 }, 00:13:21.770 { 00:13:21.770 "name": "BaseBdev3", 00:13:21.770 "uuid": "8041fb5d-6928-44ad-a773-364404ea951f", 00:13:21.770 "is_configured": true, 00:13:21.770 "data_offset": 0, 00:13:21.770 "data_size": 65536 00:13:21.770 }, 00:13:21.770 { 00:13:21.770 "name": "BaseBdev4", 00:13:21.770 "uuid": "0a368443-18f6-4bee-9bf2-27101e458f29", 00:13:21.770 "is_configured": true, 00:13:21.770 "data_offset": 0, 00:13:21.770 "data_size": 65536 00:13:21.770 } 00:13:21.770 ] 00:13:21.770 }' 00:13:21.770 15:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.770 15:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.028 15:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.028 15:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:22.028 15:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.028 15:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.028 15:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.286 15:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:22.286 15:40:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:22.286 15:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.286 15:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.286 [2024-12-06 15:40:05.374696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:22.286 BaseBdev1 00:13:22.286 15:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.286 15:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:22.286 15:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:22.286 15:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:22.286 15:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:22.286 15:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:22.286 15:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:22.286 15:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:22.286 15:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.286 15:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.286 15:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.286 15:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:22.286 15:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.286 15:40:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:22.286 [ 00:13:22.286 { 00:13:22.286 "name": "BaseBdev1", 00:13:22.286 "aliases": [ 00:13:22.286 "355a1f0c-13a2-472a-ad86-fef3357200b8" 00:13:22.286 ], 00:13:22.286 "product_name": "Malloc disk", 00:13:22.286 "block_size": 512, 00:13:22.286 "num_blocks": 65536, 00:13:22.286 "uuid": "355a1f0c-13a2-472a-ad86-fef3357200b8", 00:13:22.286 "assigned_rate_limits": { 00:13:22.286 "rw_ios_per_sec": 0, 00:13:22.286 "rw_mbytes_per_sec": 0, 00:13:22.286 "r_mbytes_per_sec": 0, 00:13:22.286 "w_mbytes_per_sec": 0 00:13:22.286 }, 00:13:22.286 "claimed": true, 00:13:22.286 "claim_type": "exclusive_write", 00:13:22.286 "zoned": false, 00:13:22.286 "supported_io_types": { 00:13:22.286 "read": true, 00:13:22.286 "write": true, 00:13:22.286 "unmap": true, 00:13:22.286 "flush": true, 00:13:22.286 "reset": true, 00:13:22.286 "nvme_admin": false, 00:13:22.286 "nvme_io": false, 00:13:22.286 "nvme_io_md": false, 00:13:22.286 "write_zeroes": true, 00:13:22.286 "zcopy": true, 00:13:22.286 "get_zone_info": false, 00:13:22.286 "zone_management": false, 00:13:22.286 "zone_append": false, 00:13:22.286 "compare": false, 00:13:22.286 "compare_and_write": false, 00:13:22.286 "abort": true, 00:13:22.286 "seek_hole": false, 00:13:22.286 "seek_data": false, 00:13:22.286 "copy": true, 00:13:22.286 "nvme_iov_md": false 00:13:22.286 }, 00:13:22.286 "memory_domains": [ 00:13:22.286 { 00:13:22.286 "dma_device_id": "system", 00:13:22.286 "dma_device_type": 1 00:13:22.286 }, 00:13:22.286 { 00:13:22.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.286 "dma_device_type": 2 00:13:22.286 } 00:13:22.286 ], 00:13:22.286 "driver_specific": {} 00:13:22.286 } 00:13:22.286 ] 00:13:22.286 15:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.286 15:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:22.287 15:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:22.287 15:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.287 15:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:22.287 15:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:22.287 15:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:22.287 15:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:22.287 15:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.287 15:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.287 15:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.287 15:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.287 15:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.287 15:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.287 15:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.287 15:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.287 15:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.287 15:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.287 "name": "Existed_Raid", 00:13:22.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.287 "strip_size_kb": 64, 00:13:22.287 "state": "configuring", 00:13:22.287 "raid_level": "raid0", 00:13:22.287 "superblock": false, 
00:13:22.287 "num_base_bdevs": 4, 00:13:22.287 "num_base_bdevs_discovered": 3, 00:13:22.287 "num_base_bdevs_operational": 4, 00:13:22.287 "base_bdevs_list": [ 00:13:22.287 { 00:13:22.287 "name": "BaseBdev1", 00:13:22.287 "uuid": "355a1f0c-13a2-472a-ad86-fef3357200b8", 00:13:22.287 "is_configured": true, 00:13:22.287 "data_offset": 0, 00:13:22.287 "data_size": 65536 00:13:22.287 }, 00:13:22.287 { 00:13:22.287 "name": null, 00:13:22.287 "uuid": "1ce9f913-40a0-42b0-974e-e97401c33ec5", 00:13:22.287 "is_configured": false, 00:13:22.287 "data_offset": 0, 00:13:22.287 "data_size": 65536 00:13:22.287 }, 00:13:22.287 { 00:13:22.287 "name": "BaseBdev3", 00:13:22.287 "uuid": "8041fb5d-6928-44ad-a773-364404ea951f", 00:13:22.287 "is_configured": true, 00:13:22.287 "data_offset": 0, 00:13:22.287 "data_size": 65536 00:13:22.287 }, 00:13:22.287 { 00:13:22.287 "name": "BaseBdev4", 00:13:22.287 "uuid": "0a368443-18f6-4bee-9bf2-27101e458f29", 00:13:22.287 "is_configured": true, 00:13:22.287 "data_offset": 0, 00:13:22.287 "data_size": 65536 00:13:22.287 } 00:13:22.287 ] 00:13:22.287 }' 00:13:22.287 15:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.287 15:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.546 15:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.546 15:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.546 15:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.546 15:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:22.546 15:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.808 15:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:22.808 15:40:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:22.808 15:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.808 15:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.808 [2024-12-06 15:40:05.866241] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:22.808 15:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.808 15:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:22.808 15:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.808 15:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:22.808 15:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:22.808 15:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:22.808 15:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:22.808 15:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.808 15:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.808 15:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.808 15:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.808 15:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.808 15:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.808 15:40:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.808 15:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.808 15:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.808 15:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.808 "name": "Existed_Raid", 00:13:22.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.808 "strip_size_kb": 64, 00:13:22.808 "state": "configuring", 00:13:22.808 "raid_level": "raid0", 00:13:22.808 "superblock": false, 00:13:22.808 "num_base_bdevs": 4, 00:13:22.808 "num_base_bdevs_discovered": 2, 00:13:22.808 "num_base_bdevs_operational": 4, 00:13:22.808 "base_bdevs_list": [ 00:13:22.808 { 00:13:22.808 "name": "BaseBdev1", 00:13:22.808 "uuid": "355a1f0c-13a2-472a-ad86-fef3357200b8", 00:13:22.808 "is_configured": true, 00:13:22.808 "data_offset": 0, 00:13:22.808 "data_size": 65536 00:13:22.808 }, 00:13:22.808 { 00:13:22.808 "name": null, 00:13:22.808 "uuid": "1ce9f913-40a0-42b0-974e-e97401c33ec5", 00:13:22.808 "is_configured": false, 00:13:22.808 "data_offset": 0, 00:13:22.808 "data_size": 65536 00:13:22.808 }, 00:13:22.808 { 00:13:22.808 "name": null, 00:13:22.808 "uuid": "8041fb5d-6928-44ad-a773-364404ea951f", 00:13:22.808 "is_configured": false, 00:13:22.808 "data_offset": 0, 00:13:22.808 "data_size": 65536 00:13:22.808 }, 00:13:22.808 { 00:13:22.808 "name": "BaseBdev4", 00:13:22.808 "uuid": "0a368443-18f6-4bee-9bf2-27101e458f29", 00:13:22.808 "is_configured": true, 00:13:22.808 "data_offset": 0, 00:13:22.808 "data_size": 65536 00:13:22.808 } 00:13:22.808 ] 00:13:22.808 }' 00:13:22.808 15:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.808 15:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.067 15:40:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.067 15:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.067 15:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.067 15:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:23.067 15:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.067 15:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:23.067 15:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:23.067 15:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.067 15:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.067 [2024-12-06 15:40:06.305673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:23.067 15:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.068 15:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:23.068 15:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.068 15:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:23.068 15:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:23.068 15:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:23.068 15:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:23.068 15:40:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.068 15:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.068 15:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.068 15:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.068 15:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.068 15:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.068 15:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.068 15:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.068 15:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.068 15:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.068 "name": "Existed_Raid", 00:13:23.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.068 "strip_size_kb": 64, 00:13:23.068 "state": "configuring", 00:13:23.068 "raid_level": "raid0", 00:13:23.068 "superblock": false, 00:13:23.068 "num_base_bdevs": 4, 00:13:23.068 "num_base_bdevs_discovered": 3, 00:13:23.068 "num_base_bdevs_operational": 4, 00:13:23.068 "base_bdevs_list": [ 00:13:23.068 { 00:13:23.068 "name": "BaseBdev1", 00:13:23.068 "uuid": "355a1f0c-13a2-472a-ad86-fef3357200b8", 00:13:23.068 "is_configured": true, 00:13:23.068 "data_offset": 0, 00:13:23.068 "data_size": 65536 00:13:23.068 }, 00:13:23.068 { 00:13:23.068 "name": null, 00:13:23.068 "uuid": "1ce9f913-40a0-42b0-974e-e97401c33ec5", 00:13:23.068 "is_configured": false, 00:13:23.068 "data_offset": 0, 00:13:23.068 "data_size": 65536 00:13:23.068 }, 00:13:23.068 { 00:13:23.068 "name": "BaseBdev3", 00:13:23.068 "uuid": "8041fb5d-6928-44ad-a773-364404ea951f", 
00:13:23.068 "is_configured": true, 00:13:23.068 "data_offset": 0, 00:13:23.068 "data_size": 65536 00:13:23.068 }, 00:13:23.068 { 00:13:23.068 "name": "BaseBdev4", 00:13:23.068 "uuid": "0a368443-18f6-4bee-9bf2-27101e458f29", 00:13:23.068 "is_configured": true, 00:13:23.068 "data_offset": 0, 00:13:23.068 "data_size": 65536 00:13:23.068 } 00:13:23.068 ] 00:13:23.068 }' 00:13:23.068 15:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.068 15:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.635 15:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.635 15:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:23.635 15:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.635 15:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.635 15:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.635 15:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:23.635 15:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:23.635 15:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.635 15:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.636 [2024-12-06 15:40:06.741398] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:23.636 15:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.636 15:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:23.636 15:40:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.636 15:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:23.636 15:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:23.636 15:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:23.636 15:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:23.636 15:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.636 15:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.636 15:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.636 15:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.636 15:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.636 15:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.636 15:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.636 15:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.636 15:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.636 15:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.636 "name": "Existed_Raid", 00:13:23.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.636 "strip_size_kb": 64, 00:13:23.636 "state": "configuring", 00:13:23.636 "raid_level": "raid0", 00:13:23.636 "superblock": false, 00:13:23.636 "num_base_bdevs": 4, 00:13:23.636 "num_base_bdevs_discovered": 2, 00:13:23.636 
"num_base_bdevs_operational": 4, 00:13:23.636 "base_bdevs_list": [ 00:13:23.636 { 00:13:23.636 "name": null, 00:13:23.636 "uuid": "355a1f0c-13a2-472a-ad86-fef3357200b8", 00:13:23.636 "is_configured": false, 00:13:23.636 "data_offset": 0, 00:13:23.636 "data_size": 65536 00:13:23.636 }, 00:13:23.636 { 00:13:23.636 "name": null, 00:13:23.636 "uuid": "1ce9f913-40a0-42b0-974e-e97401c33ec5", 00:13:23.636 "is_configured": false, 00:13:23.636 "data_offset": 0, 00:13:23.636 "data_size": 65536 00:13:23.636 }, 00:13:23.636 { 00:13:23.636 "name": "BaseBdev3", 00:13:23.636 "uuid": "8041fb5d-6928-44ad-a773-364404ea951f", 00:13:23.636 "is_configured": true, 00:13:23.636 "data_offset": 0, 00:13:23.636 "data_size": 65536 00:13:23.636 }, 00:13:23.636 { 00:13:23.636 "name": "BaseBdev4", 00:13:23.636 "uuid": "0a368443-18f6-4bee-9bf2-27101e458f29", 00:13:23.636 "is_configured": true, 00:13:23.636 "data_offset": 0, 00:13:23.636 "data_size": 65536 00:13:23.636 } 00:13:23.636 ] 00:13:23.636 }' 00:13:23.636 15:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.636 15:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.203 15:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:24.203 15:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.203 15:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.203 15:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.203 15:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.203 15:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:24.203 15:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:13:24.203 15:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.203 15:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.203 [2024-12-06 15:40:07.329337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:24.203 15:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.203 15:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:24.203 15:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:24.203 15:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:24.203 15:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:24.203 15:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:24.203 15:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:24.203 15:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.203 15:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.203 15:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.203 15:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.203 15:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.203 15:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.203 15:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.203 
15:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.203 15:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.204 15:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.204 "name": "Existed_Raid", 00:13:24.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.204 "strip_size_kb": 64, 00:13:24.204 "state": "configuring", 00:13:24.204 "raid_level": "raid0", 00:13:24.204 "superblock": false, 00:13:24.204 "num_base_bdevs": 4, 00:13:24.204 "num_base_bdevs_discovered": 3, 00:13:24.204 "num_base_bdevs_operational": 4, 00:13:24.204 "base_bdevs_list": [ 00:13:24.204 { 00:13:24.204 "name": null, 00:13:24.204 "uuid": "355a1f0c-13a2-472a-ad86-fef3357200b8", 00:13:24.204 "is_configured": false, 00:13:24.204 "data_offset": 0, 00:13:24.204 "data_size": 65536 00:13:24.204 }, 00:13:24.204 { 00:13:24.204 "name": "BaseBdev2", 00:13:24.204 "uuid": "1ce9f913-40a0-42b0-974e-e97401c33ec5", 00:13:24.204 "is_configured": true, 00:13:24.204 "data_offset": 0, 00:13:24.204 "data_size": 65536 00:13:24.204 }, 00:13:24.204 { 00:13:24.204 "name": "BaseBdev3", 00:13:24.204 "uuid": "8041fb5d-6928-44ad-a773-364404ea951f", 00:13:24.204 "is_configured": true, 00:13:24.204 "data_offset": 0, 00:13:24.204 "data_size": 65536 00:13:24.204 }, 00:13:24.204 { 00:13:24.204 "name": "BaseBdev4", 00:13:24.204 "uuid": "0a368443-18f6-4bee-9bf2-27101e458f29", 00:13:24.204 "is_configured": true, 00:13:24.204 "data_offset": 0, 00:13:24.204 "data_size": 65536 00:13:24.204 } 00:13:24.204 ] 00:13:24.204 }' 00:13:24.204 15:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.204 15:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.772 15:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.772 15:40:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.772 15:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.772 15:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:24.772 15:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.772 15:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:24.772 15:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.772 15:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.772 15:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.772 15:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:24.772 15:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.772 15:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 355a1f0c-13a2-472a-ad86-fef3357200b8 00:13:24.772 15:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.772 15:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.772 [2024-12-06 15:40:07.929136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:24.772 NewBaseBdev 00:13:24.772 [2024-12-06 15:40:07.929335] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:24.772 [2024-12-06 15:40:07.929359] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:13:24.772 [2024-12-06 15:40:07.929755] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d0000063c0 00:13:24.772 [2024-12-06 15:40:07.929932] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:24.772 [2024-12-06 15:40:07.929946] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:24.772 [2024-12-06 15:40:07.930226] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:24.772 15:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.772 15:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:24.772 15:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:24.772 15:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:24.772 15:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:24.772 15:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:24.772 15:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:24.772 15:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:24.772 15:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.772 15:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.772 15:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.772 15:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:24.772 15:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.772 15:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:24.772 [ 00:13:24.772 { 00:13:24.772 "name": "NewBaseBdev", 00:13:24.772 "aliases": [ 00:13:24.772 "355a1f0c-13a2-472a-ad86-fef3357200b8" 00:13:24.772 ], 00:13:24.772 "product_name": "Malloc disk", 00:13:24.772 "block_size": 512, 00:13:24.772 "num_blocks": 65536, 00:13:24.772 "uuid": "355a1f0c-13a2-472a-ad86-fef3357200b8", 00:13:24.772 "assigned_rate_limits": { 00:13:24.772 "rw_ios_per_sec": 0, 00:13:24.772 "rw_mbytes_per_sec": 0, 00:13:24.772 "r_mbytes_per_sec": 0, 00:13:24.772 "w_mbytes_per_sec": 0 00:13:24.772 }, 00:13:24.772 "claimed": true, 00:13:24.772 "claim_type": "exclusive_write", 00:13:24.772 "zoned": false, 00:13:24.772 "supported_io_types": { 00:13:24.772 "read": true, 00:13:24.772 "write": true, 00:13:24.772 "unmap": true, 00:13:24.772 "flush": true, 00:13:24.772 "reset": true, 00:13:24.772 "nvme_admin": false, 00:13:24.772 "nvme_io": false, 00:13:24.772 "nvme_io_md": false, 00:13:24.772 "write_zeroes": true, 00:13:24.772 "zcopy": true, 00:13:24.772 "get_zone_info": false, 00:13:24.772 "zone_management": false, 00:13:24.772 "zone_append": false, 00:13:24.772 "compare": false, 00:13:24.772 "compare_and_write": false, 00:13:24.772 "abort": true, 00:13:24.772 "seek_hole": false, 00:13:24.772 "seek_data": false, 00:13:24.772 "copy": true, 00:13:24.772 "nvme_iov_md": false 00:13:24.772 }, 00:13:24.772 "memory_domains": [ 00:13:24.772 { 00:13:24.772 "dma_device_id": "system", 00:13:24.772 "dma_device_type": 1 00:13:24.772 }, 00:13:24.772 { 00:13:24.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.772 "dma_device_type": 2 00:13:24.772 } 00:13:24.772 ], 00:13:24.772 "driver_specific": {} 00:13:24.772 } 00:13:24.772 ] 00:13:24.772 15:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.772 15:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:24.772 15:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid 
online raid0 64 4 00:13:24.772 15:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:24.772 15:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.772 15:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:24.772 15:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:24.772 15:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:24.772 15:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.772 15:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.772 15:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.773 15:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.773 15:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.773 15:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.773 15:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.773 15:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.773 15:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.773 15:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.773 "name": "Existed_Raid", 00:13:24.773 "uuid": "84b9bc55-68c6-4c98-bbaf-ad380a0dc02c", 00:13:24.773 "strip_size_kb": 64, 00:13:24.773 "state": "online", 00:13:24.773 "raid_level": "raid0", 00:13:24.773 "superblock": false, 00:13:24.773 "num_base_bdevs": 4, 00:13:24.773 
"num_base_bdevs_discovered": 4, 00:13:24.773 "num_base_bdevs_operational": 4, 00:13:24.773 "base_bdevs_list": [ 00:13:24.773 { 00:13:24.773 "name": "NewBaseBdev", 00:13:24.773 "uuid": "355a1f0c-13a2-472a-ad86-fef3357200b8", 00:13:24.773 "is_configured": true, 00:13:24.773 "data_offset": 0, 00:13:24.773 "data_size": 65536 00:13:24.773 }, 00:13:24.773 { 00:13:24.773 "name": "BaseBdev2", 00:13:24.773 "uuid": "1ce9f913-40a0-42b0-974e-e97401c33ec5", 00:13:24.773 "is_configured": true, 00:13:24.773 "data_offset": 0, 00:13:24.773 "data_size": 65536 00:13:24.773 }, 00:13:24.773 { 00:13:24.773 "name": "BaseBdev3", 00:13:24.773 "uuid": "8041fb5d-6928-44ad-a773-364404ea951f", 00:13:24.773 "is_configured": true, 00:13:24.773 "data_offset": 0, 00:13:24.773 "data_size": 65536 00:13:24.773 }, 00:13:24.773 { 00:13:24.773 "name": "BaseBdev4", 00:13:24.773 "uuid": "0a368443-18f6-4bee-9bf2-27101e458f29", 00:13:24.773 "is_configured": true, 00:13:24.773 "data_offset": 0, 00:13:24.773 "data_size": 65536 00:13:24.773 } 00:13:24.773 ] 00:13:24.773 }' 00:13:24.773 15:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.773 15:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.341 15:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:25.341 15:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:25.341 15:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:25.341 15:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:25.341 15:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:25.341 15:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:25.341 15:40:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:25.341 15:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:25.341 15:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.341 15:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.341 [2024-12-06 15:40:08.417052] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:25.341 15:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.341 15:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:25.341 "name": "Existed_Raid", 00:13:25.341 "aliases": [ 00:13:25.341 "84b9bc55-68c6-4c98-bbaf-ad380a0dc02c" 00:13:25.341 ], 00:13:25.341 "product_name": "Raid Volume", 00:13:25.341 "block_size": 512, 00:13:25.341 "num_blocks": 262144, 00:13:25.341 "uuid": "84b9bc55-68c6-4c98-bbaf-ad380a0dc02c", 00:13:25.341 "assigned_rate_limits": { 00:13:25.341 "rw_ios_per_sec": 0, 00:13:25.341 "rw_mbytes_per_sec": 0, 00:13:25.341 "r_mbytes_per_sec": 0, 00:13:25.341 "w_mbytes_per_sec": 0 00:13:25.341 }, 00:13:25.341 "claimed": false, 00:13:25.341 "zoned": false, 00:13:25.341 "supported_io_types": { 00:13:25.341 "read": true, 00:13:25.341 "write": true, 00:13:25.341 "unmap": true, 00:13:25.341 "flush": true, 00:13:25.341 "reset": true, 00:13:25.341 "nvme_admin": false, 00:13:25.341 "nvme_io": false, 00:13:25.341 "nvme_io_md": false, 00:13:25.341 "write_zeroes": true, 00:13:25.341 "zcopy": false, 00:13:25.341 "get_zone_info": false, 00:13:25.341 "zone_management": false, 00:13:25.341 "zone_append": false, 00:13:25.341 "compare": false, 00:13:25.341 "compare_and_write": false, 00:13:25.341 "abort": false, 00:13:25.341 "seek_hole": false, 00:13:25.341 "seek_data": false, 00:13:25.341 "copy": false, 00:13:25.341 "nvme_iov_md": false 00:13:25.341 }, 00:13:25.341 "memory_domains": [ 
00:13:25.341 { 00:13:25.341 "dma_device_id": "system", 00:13:25.341 "dma_device_type": 1 00:13:25.341 }, 00:13:25.341 { 00:13:25.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.341 "dma_device_type": 2 00:13:25.341 }, 00:13:25.341 { 00:13:25.341 "dma_device_id": "system", 00:13:25.341 "dma_device_type": 1 00:13:25.341 }, 00:13:25.341 { 00:13:25.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.341 "dma_device_type": 2 00:13:25.341 }, 00:13:25.341 { 00:13:25.341 "dma_device_id": "system", 00:13:25.341 "dma_device_type": 1 00:13:25.341 }, 00:13:25.341 { 00:13:25.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.341 "dma_device_type": 2 00:13:25.341 }, 00:13:25.341 { 00:13:25.341 "dma_device_id": "system", 00:13:25.341 "dma_device_type": 1 00:13:25.341 }, 00:13:25.341 { 00:13:25.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.341 "dma_device_type": 2 00:13:25.341 } 00:13:25.341 ], 00:13:25.341 "driver_specific": { 00:13:25.341 "raid": { 00:13:25.341 "uuid": "84b9bc55-68c6-4c98-bbaf-ad380a0dc02c", 00:13:25.341 "strip_size_kb": 64, 00:13:25.341 "state": "online", 00:13:25.341 "raid_level": "raid0", 00:13:25.341 "superblock": false, 00:13:25.341 "num_base_bdevs": 4, 00:13:25.341 "num_base_bdevs_discovered": 4, 00:13:25.341 "num_base_bdevs_operational": 4, 00:13:25.341 "base_bdevs_list": [ 00:13:25.341 { 00:13:25.341 "name": "NewBaseBdev", 00:13:25.341 "uuid": "355a1f0c-13a2-472a-ad86-fef3357200b8", 00:13:25.341 "is_configured": true, 00:13:25.341 "data_offset": 0, 00:13:25.341 "data_size": 65536 00:13:25.341 }, 00:13:25.341 { 00:13:25.341 "name": "BaseBdev2", 00:13:25.341 "uuid": "1ce9f913-40a0-42b0-974e-e97401c33ec5", 00:13:25.341 "is_configured": true, 00:13:25.341 "data_offset": 0, 00:13:25.341 "data_size": 65536 00:13:25.341 }, 00:13:25.341 { 00:13:25.341 "name": "BaseBdev3", 00:13:25.341 "uuid": "8041fb5d-6928-44ad-a773-364404ea951f", 00:13:25.341 "is_configured": true, 00:13:25.341 "data_offset": 0, 00:13:25.341 "data_size": 65536 
00:13:25.341 }, 00:13:25.341 { 00:13:25.341 "name": "BaseBdev4", 00:13:25.342 "uuid": "0a368443-18f6-4bee-9bf2-27101e458f29", 00:13:25.342 "is_configured": true, 00:13:25.342 "data_offset": 0, 00:13:25.342 "data_size": 65536 00:13:25.342 } 00:13:25.342 ] 00:13:25.342 } 00:13:25.342 } 00:13:25.342 }' 00:13:25.342 15:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:25.342 15:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:25.342 BaseBdev2 00:13:25.342 BaseBdev3 00:13:25.342 BaseBdev4' 00:13:25.342 15:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.342 15:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:25.342 15:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:25.342 15:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:25.342 15:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.342 15:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.342 15:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.342 15:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.342 15:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:25.342 15:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:25.342 15:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:25.342 
15:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:25.342 15:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.342 15:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.342 15:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.342 15:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.342 15:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:25.342 15:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:25.342 15:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:25.342 15:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:25.342 15:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.342 15:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.342 15:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.601 15:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.601 15:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:25.601 15:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:25.601 15:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:25.601 15:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:13:25.601 15:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:25.601 15:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.601 15:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.601 15:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.601 15:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:25.601 15:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:25.601 15:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:25.601 15:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.601 15:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.601 [2024-12-06 15:40:08.720412] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:25.601 [2024-12-06 15:40:08.720598] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:25.601 [2024-12-06 15:40:08.720882] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:25.601 [2024-12-06 15:40:08.721051] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:25.601 [2024-12-06 15:40:08.721153] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:25.601 15:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.601 15:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69404 00:13:25.601 15:40:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 69404 ']' 00:13:25.601 15:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69404 00:13:25.601 15:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:25.601 15:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:25.601 15:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69404 00:13:25.601 killing process with pid 69404 00:13:25.601 15:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:25.601 15:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:25.601 15:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69404' 00:13:25.601 15:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69404 00:13:25.601 [2024-12-06 15:40:08.770351] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:25.601 15:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69404 00:13:26.169 [2024-12-06 15:40:09.211428] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:27.548 15:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:27.548 00:13:27.548 real 0m11.479s 00:13:27.548 user 0m17.788s 00:13:27.548 sys 0m2.497s 00:13:27.548 15:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:27.548 15:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.548 ************************************ 00:13:27.548 END TEST raid_state_function_test 00:13:27.548 ************************************ 00:13:27.548 15:40:10 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:13:27.548 15:40:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:27.548 15:40:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:27.548 15:40:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:27.548 ************************************ 00:13:27.548 START TEST raid_state_function_test_sb 00:13:27.548 ************************************ 00:13:27.548 15:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:13:27.548 15:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:13:27.548 15:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:27.548 15:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:27.548 15:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:27.548 15:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:27.548 15:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:27.548 15:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:27.548 15:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:27.548 15:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:27.548 15:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:27.548 15:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:27.548 15:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:27.548 15:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:27.548 
15:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:27.548 15:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:27.548 15:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:27.548 15:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:27.548 15:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:27.548 15:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:27.548 15:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:27.548 15:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:27.548 15:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:27.548 15:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:27.548 15:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:27.548 15:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:13:27.548 15:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:27.548 15:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:27.549 15:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:27.549 15:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:27.549 Process raid pid: 70070 00:13:27.549 15:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70070 00:13:27.549 15:40:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:27.549 15:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70070' 00:13:27.549 15:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70070 00:13:27.549 15:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70070 ']' 00:13:27.549 15:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.549 15:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:27.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.549 15:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.549 15:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:27.549 15:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.549 [2024-12-06 15:40:10.675386] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:13:27.549 [2024-12-06 15:40:10.675729] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.809 [2024-12-06 15:40:10.864077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.809 [2024-12-06 15:40:11.024603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.068 [2024-12-06 15:40:11.276280] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:28.068 [2024-12-06 15:40:11.276587] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:28.325 15:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:28.325 15:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:28.326 15:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:28.326 15:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.326 15:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.326 [2024-12-06 15:40:11.533189] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:28.326 [2024-12-06 15:40:11.533472] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:28.326 [2024-12-06 15:40:11.533651] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:28.326 [2024-12-06 15:40:11.533703] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:28.326 [2024-12-06 15:40:11.533733] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:13:28.326 [2024-12-06 15:40:11.533767] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:28.326 [2024-12-06 15:40:11.533899] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:28.326 [2024-12-06 15:40:11.533943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:28.326 15:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.326 15:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:28.326 15:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.326 15:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:28.326 15:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:28.326 15:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:28.326 15:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:28.326 15:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.326 15:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.326 15:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.326 15:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.326 15:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.326 15:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.326 15:40:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.326 15:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.326 15:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.326 15:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.326 "name": "Existed_Raid", 00:13:28.326 "uuid": "812f51e8-f46f-403a-b250-6bd0a4f22280", 00:13:28.326 "strip_size_kb": 64, 00:13:28.326 "state": "configuring", 00:13:28.326 "raid_level": "raid0", 00:13:28.326 "superblock": true, 00:13:28.326 "num_base_bdevs": 4, 00:13:28.326 "num_base_bdevs_discovered": 0, 00:13:28.326 "num_base_bdevs_operational": 4, 00:13:28.326 "base_bdevs_list": [ 00:13:28.326 { 00:13:28.326 "name": "BaseBdev1", 00:13:28.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.326 "is_configured": false, 00:13:28.326 "data_offset": 0, 00:13:28.326 "data_size": 0 00:13:28.326 }, 00:13:28.326 { 00:13:28.326 "name": "BaseBdev2", 00:13:28.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.326 "is_configured": false, 00:13:28.326 "data_offset": 0, 00:13:28.326 "data_size": 0 00:13:28.326 }, 00:13:28.326 { 00:13:28.326 "name": "BaseBdev3", 00:13:28.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.326 "is_configured": false, 00:13:28.326 "data_offset": 0, 00:13:28.326 "data_size": 0 00:13:28.326 }, 00:13:28.326 { 00:13:28.326 "name": "BaseBdev4", 00:13:28.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.326 "is_configured": false, 00:13:28.326 "data_offset": 0, 00:13:28.326 "data_size": 0 00:13:28.326 } 00:13:28.326 ] 00:13:28.326 }' 00:13:28.326 15:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.326 15:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.893 15:40:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:28.893 15:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.893 15:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.893 [2024-12-06 15:40:11.964693] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:28.893 [2024-12-06 15:40:11.964740] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:28.893 15:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.893 15:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:28.893 15:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.893 15:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.893 [2024-12-06 15:40:11.976690] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:28.893 [2024-12-06 15:40:11.976855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:28.893 [2024-12-06 15:40:11.977001] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:28.893 [2024-12-06 15:40:11.977027] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:28.893 [2024-12-06 15:40:11.977036] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:28.893 [2024-12-06 15:40:11.977049] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:28.893 [2024-12-06 15:40:11.977058] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:13:28.893 [2024-12-06 15:40:11.977071] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:28.893 15:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.893 15:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:28.893 15:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.893 15:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.893 [2024-12-06 15:40:12.029743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:28.893 BaseBdev1 00:13:28.893 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.893 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:28.893 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:28.893 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:28.893 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:28.893 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:28.893 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:28.893 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:28.893 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.893 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.893 15:40:12 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.893 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:28.893 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.893 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.893 [ 00:13:28.893 { 00:13:28.893 "name": "BaseBdev1", 00:13:28.893 "aliases": [ 00:13:28.893 "ad9fe694-c6e3-49f0-b7bf-6672bc5677d6" 00:13:28.893 ], 00:13:28.893 "product_name": "Malloc disk", 00:13:28.893 "block_size": 512, 00:13:28.893 "num_blocks": 65536, 00:13:28.893 "uuid": "ad9fe694-c6e3-49f0-b7bf-6672bc5677d6", 00:13:28.893 "assigned_rate_limits": { 00:13:28.893 "rw_ios_per_sec": 0, 00:13:28.893 "rw_mbytes_per_sec": 0, 00:13:28.893 "r_mbytes_per_sec": 0, 00:13:28.893 "w_mbytes_per_sec": 0 00:13:28.893 }, 00:13:28.893 "claimed": true, 00:13:28.893 "claim_type": "exclusive_write", 00:13:28.893 "zoned": false, 00:13:28.893 "supported_io_types": { 00:13:28.893 "read": true, 00:13:28.893 "write": true, 00:13:28.893 "unmap": true, 00:13:28.893 "flush": true, 00:13:28.893 "reset": true, 00:13:28.893 "nvme_admin": false, 00:13:28.893 "nvme_io": false, 00:13:28.893 "nvme_io_md": false, 00:13:28.893 "write_zeroes": true, 00:13:28.893 "zcopy": true, 00:13:28.893 "get_zone_info": false, 00:13:28.893 "zone_management": false, 00:13:28.893 "zone_append": false, 00:13:28.893 "compare": false, 00:13:28.893 "compare_and_write": false, 00:13:28.893 "abort": true, 00:13:28.893 "seek_hole": false, 00:13:28.893 "seek_data": false, 00:13:28.893 "copy": true, 00:13:28.893 "nvme_iov_md": false 00:13:28.893 }, 00:13:28.893 "memory_domains": [ 00:13:28.893 { 00:13:28.893 "dma_device_id": "system", 00:13:28.893 "dma_device_type": 1 00:13:28.893 }, 00:13:28.893 { 00:13:28.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.893 "dma_device_type": 2 00:13:28.893 } 
00:13:28.893 ], 00:13:28.893 "driver_specific": {} 00:13:28.893 } 00:13:28.893 ] 00:13:28.893 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.893 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:28.893 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:28.893 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.893 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:28.893 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:28.893 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:28.893 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:28.893 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.893 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.893 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.893 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.893 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.893 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.893 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.893 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.893 15:40:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.893 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.893 "name": "Existed_Raid", 00:13:28.893 "uuid": "24aaea16-39d7-4d3d-af48-77aef8ef867a", 00:13:28.893 "strip_size_kb": 64, 00:13:28.893 "state": "configuring", 00:13:28.893 "raid_level": "raid0", 00:13:28.893 "superblock": true, 00:13:28.893 "num_base_bdevs": 4, 00:13:28.893 "num_base_bdevs_discovered": 1, 00:13:28.893 "num_base_bdevs_operational": 4, 00:13:28.893 "base_bdevs_list": [ 00:13:28.893 { 00:13:28.893 "name": "BaseBdev1", 00:13:28.893 "uuid": "ad9fe694-c6e3-49f0-b7bf-6672bc5677d6", 00:13:28.893 "is_configured": true, 00:13:28.893 "data_offset": 2048, 00:13:28.893 "data_size": 63488 00:13:28.893 }, 00:13:28.893 { 00:13:28.893 "name": "BaseBdev2", 00:13:28.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.893 "is_configured": false, 00:13:28.893 "data_offset": 0, 00:13:28.893 "data_size": 0 00:13:28.893 }, 00:13:28.893 { 00:13:28.893 "name": "BaseBdev3", 00:13:28.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.893 "is_configured": false, 00:13:28.893 "data_offset": 0, 00:13:28.893 "data_size": 0 00:13:28.893 }, 00:13:28.893 { 00:13:28.893 "name": "BaseBdev4", 00:13:28.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.893 "is_configured": false, 00:13:28.893 "data_offset": 0, 00:13:28.893 "data_size": 0 00:13:28.893 } 00:13:28.893 ] 00:13:28.893 }' 00:13:28.893 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.893 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.460 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:29.460 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.460 15:40:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.460 [2024-12-06 15:40:12.465471] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:29.460 [2024-12-06 15:40:12.465558] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:29.460 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.460 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:29.460 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.460 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.460 [2024-12-06 15:40:12.477540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:29.460 [2024-12-06 15:40:12.480195] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:29.460 [2024-12-06 15:40:12.480353] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:29.460 [2024-12-06 15:40:12.480495] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:29.460 [2024-12-06 15:40:12.480559] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:29.460 [2024-12-06 15:40:12.480590] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:29.460 [2024-12-06 15:40:12.480623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:29.460 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.460 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:13:29.460 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:29.460 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:29.460 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.460 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:29.460 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:29.460 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:29.460 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:29.460 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.460 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.460 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.460 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.460 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.460 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.460 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.460 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.460 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.460 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:13:29.460 "name": "Existed_Raid", 00:13:29.460 "uuid": "76bf4ae3-4466-47e9-af20-e90d042bfea8", 00:13:29.460 "strip_size_kb": 64, 00:13:29.460 "state": "configuring", 00:13:29.460 "raid_level": "raid0", 00:13:29.460 "superblock": true, 00:13:29.460 "num_base_bdevs": 4, 00:13:29.460 "num_base_bdevs_discovered": 1, 00:13:29.460 "num_base_bdevs_operational": 4, 00:13:29.460 "base_bdevs_list": [ 00:13:29.460 { 00:13:29.460 "name": "BaseBdev1", 00:13:29.460 "uuid": "ad9fe694-c6e3-49f0-b7bf-6672bc5677d6", 00:13:29.460 "is_configured": true, 00:13:29.460 "data_offset": 2048, 00:13:29.460 "data_size": 63488 00:13:29.460 }, 00:13:29.460 { 00:13:29.460 "name": "BaseBdev2", 00:13:29.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.460 "is_configured": false, 00:13:29.460 "data_offset": 0, 00:13:29.460 "data_size": 0 00:13:29.460 }, 00:13:29.460 { 00:13:29.460 "name": "BaseBdev3", 00:13:29.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.460 "is_configured": false, 00:13:29.460 "data_offset": 0, 00:13:29.460 "data_size": 0 00:13:29.460 }, 00:13:29.460 { 00:13:29.460 "name": "BaseBdev4", 00:13:29.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.460 "is_configured": false, 00:13:29.460 "data_offset": 0, 00:13:29.460 "data_size": 0 00:13:29.460 } 00:13:29.460 ] 00:13:29.460 }' 00:13:29.460 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.460 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.719 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:29.719 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.719 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.719 [2024-12-06 15:40:12.930420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:13:29.719 BaseBdev2 00:13:29.719 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.719 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:29.719 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:29.719 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:29.719 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:29.719 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:29.719 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:29.719 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:29.719 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.719 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.719 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.719 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:29.719 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.719 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.719 [ 00:13:29.719 { 00:13:29.719 "name": "BaseBdev2", 00:13:29.719 "aliases": [ 00:13:29.719 "75705e65-7c32-448a-bf71-19373c42cdd9" 00:13:29.719 ], 00:13:29.719 "product_name": "Malloc disk", 00:13:29.719 "block_size": 512, 00:13:29.719 "num_blocks": 65536, 00:13:29.719 "uuid": "75705e65-7c32-448a-bf71-19373c42cdd9", 
00:13:29.719 "assigned_rate_limits": { 00:13:29.719 "rw_ios_per_sec": 0, 00:13:29.719 "rw_mbytes_per_sec": 0, 00:13:29.719 "r_mbytes_per_sec": 0, 00:13:29.719 "w_mbytes_per_sec": 0 00:13:29.719 }, 00:13:29.719 "claimed": true, 00:13:29.719 "claim_type": "exclusive_write", 00:13:29.719 "zoned": false, 00:13:29.719 "supported_io_types": { 00:13:29.719 "read": true, 00:13:29.719 "write": true, 00:13:29.719 "unmap": true, 00:13:29.720 "flush": true, 00:13:29.720 "reset": true, 00:13:29.720 "nvme_admin": false, 00:13:29.720 "nvme_io": false, 00:13:29.720 "nvme_io_md": false, 00:13:29.720 "write_zeroes": true, 00:13:29.720 "zcopy": true, 00:13:29.720 "get_zone_info": false, 00:13:29.720 "zone_management": false, 00:13:29.720 "zone_append": false, 00:13:29.720 "compare": false, 00:13:29.720 "compare_and_write": false, 00:13:29.720 "abort": true, 00:13:29.720 "seek_hole": false, 00:13:29.720 "seek_data": false, 00:13:29.720 "copy": true, 00:13:29.720 "nvme_iov_md": false 00:13:29.720 }, 00:13:29.720 "memory_domains": [ 00:13:29.720 { 00:13:29.720 "dma_device_id": "system", 00:13:29.720 "dma_device_type": 1 00:13:29.720 }, 00:13:29.720 { 00:13:29.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.720 "dma_device_type": 2 00:13:29.720 } 00:13:29.720 ], 00:13:29.720 "driver_specific": {} 00:13:29.720 } 00:13:29.720 ] 00:13:29.720 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.720 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:29.720 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:29.720 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:29.720 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:29.720 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:13:29.720 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:29.720 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:29.720 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:29.720 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:29.720 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.720 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.720 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.720 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.720 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.720 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.720 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.720 15:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.720 15:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.978 15:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.978 "name": "Existed_Raid", 00:13:29.978 "uuid": "76bf4ae3-4466-47e9-af20-e90d042bfea8", 00:13:29.978 "strip_size_kb": 64, 00:13:29.978 "state": "configuring", 00:13:29.978 "raid_level": "raid0", 00:13:29.978 "superblock": true, 00:13:29.978 "num_base_bdevs": 4, 00:13:29.978 "num_base_bdevs_discovered": 2, 00:13:29.978 
"num_base_bdevs_operational": 4, 00:13:29.978 "base_bdevs_list": [ 00:13:29.978 { 00:13:29.978 "name": "BaseBdev1", 00:13:29.978 "uuid": "ad9fe694-c6e3-49f0-b7bf-6672bc5677d6", 00:13:29.978 "is_configured": true, 00:13:29.978 "data_offset": 2048, 00:13:29.978 "data_size": 63488 00:13:29.978 }, 00:13:29.978 { 00:13:29.979 "name": "BaseBdev2", 00:13:29.979 "uuid": "75705e65-7c32-448a-bf71-19373c42cdd9", 00:13:29.979 "is_configured": true, 00:13:29.979 "data_offset": 2048, 00:13:29.979 "data_size": 63488 00:13:29.979 }, 00:13:29.979 { 00:13:29.979 "name": "BaseBdev3", 00:13:29.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.979 "is_configured": false, 00:13:29.979 "data_offset": 0, 00:13:29.979 "data_size": 0 00:13:29.979 }, 00:13:29.979 { 00:13:29.979 "name": "BaseBdev4", 00:13:29.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.979 "is_configured": false, 00:13:29.979 "data_offset": 0, 00:13:29.979 "data_size": 0 00:13:29.979 } 00:13:29.979 ] 00:13:29.979 }' 00:13:29.979 15:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.979 15:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.238 15:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:30.238 15:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.238 15:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.238 [2024-12-06 15:40:13.457046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:30.238 BaseBdev3 00:13:30.238 15:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.238 15:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:30.238 15:40:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:30.238 15:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:30.238 15:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:30.238 15:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:30.238 15:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:30.238 15:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:30.238 15:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.238 15:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.238 15:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.238 15:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:30.238 15:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.238 15:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.238 [ 00:13:30.238 { 00:13:30.238 "name": "BaseBdev3", 00:13:30.238 "aliases": [ 00:13:30.238 "52525ec4-6f19-4ca8-b3f8-b94af482a239" 00:13:30.238 ], 00:13:30.238 "product_name": "Malloc disk", 00:13:30.238 "block_size": 512, 00:13:30.238 "num_blocks": 65536, 00:13:30.238 "uuid": "52525ec4-6f19-4ca8-b3f8-b94af482a239", 00:13:30.238 "assigned_rate_limits": { 00:13:30.238 "rw_ios_per_sec": 0, 00:13:30.238 "rw_mbytes_per_sec": 0, 00:13:30.238 "r_mbytes_per_sec": 0, 00:13:30.238 "w_mbytes_per_sec": 0 00:13:30.238 }, 00:13:30.238 "claimed": true, 00:13:30.238 "claim_type": "exclusive_write", 00:13:30.238 "zoned": false, 00:13:30.238 "supported_io_types": { 
00:13:30.238 "read": true, 00:13:30.238 "write": true, 00:13:30.238 "unmap": true, 00:13:30.238 "flush": true, 00:13:30.238 "reset": true, 00:13:30.238 "nvme_admin": false, 00:13:30.238 "nvme_io": false, 00:13:30.238 "nvme_io_md": false, 00:13:30.238 "write_zeroes": true, 00:13:30.238 "zcopy": true, 00:13:30.238 "get_zone_info": false, 00:13:30.238 "zone_management": false, 00:13:30.238 "zone_append": false, 00:13:30.238 "compare": false, 00:13:30.238 "compare_and_write": false, 00:13:30.238 "abort": true, 00:13:30.238 "seek_hole": false, 00:13:30.238 "seek_data": false, 00:13:30.238 "copy": true, 00:13:30.238 "nvme_iov_md": false 00:13:30.238 }, 00:13:30.238 "memory_domains": [ 00:13:30.238 { 00:13:30.238 "dma_device_id": "system", 00:13:30.238 "dma_device_type": 1 00:13:30.238 }, 00:13:30.238 { 00:13:30.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.238 "dma_device_type": 2 00:13:30.238 } 00:13:30.238 ], 00:13:30.238 "driver_specific": {} 00:13:30.238 } 00:13:30.238 ] 00:13:30.238 15:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.238 15:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:30.238 15:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:30.238 15:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:30.238 15:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:30.238 15:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:30.238 15:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:30.238 15:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:30.238 15:40:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:30.238 15:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:30.238 15:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.238 15:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.238 15:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.238 15:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.238 15:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.238 15:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.238 15:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.238 15:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.497 15:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.497 15:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.497 "name": "Existed_Raid", 00:13:30.497 "uuid": "76bf4ae3-4466-47e9-af20-e90d042bfea8", 00:13:30.497 "strip_size_kb": 64, 00:13:30.497 "state": "configuring", 00:13:30.497 "raid_level": "raid0", 00:13:30.497 "superblock": true, 00:13:30.497 "num_base_bdevs": 4, 00:13:30.497 "num_base_bdevs_discovered": 3, 00:13:30.497 "num_base_bdevs_operational": 4, 00:13:30.497 "base_bdevs_list": [ 00:13:30.497 { 00:13:30.497 "name": "BaseBdev1", 00:13:30.497 "uuid": "ad9fe694-c6e3-49f0-b7bf-6672bc5677d6", 00:13:30.497 "is_configured": true, 00:13:30.497 "data_offset": 2048, 00:13:30.497 "data_size": 63488 00:13:30.497 }, 00:13:30.497 { 00:13:30.497 "name": "BaseBdev2", 00:13:30.497 
"uuid": "75705e65-7c32-448a-bf71-19373c42cdd9", 00:13:30.497 "is_configured": true, 00:13:30.497 "data_offset": 2048, 00:13:30.497 "data_size": 63488 00:13:30.497 }, 00:13:30.497 { 00:13:30.497 "name": "BaseBdev3", 00:13:30.497 "uuid": "52525ec4-6f19-4ca8-b3f8-b94af482a239", 00:13:30.497 "is_configured": true, 00:13:30.497 "data_offset": 2048, 00:13:30.497 "data_size": 63488 00:13:30.497 }, 00:13:30.497 { 00:13:30.497 "name": "BaseBdev4", 00:13:30.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.497 "is_configured": false, 00:13:30.497 "data_offset": 0, 00:13:30.497 "data_size": 0 00:13:30.497 } 00:13:30.497 ] 00:13:30.497 }' 00:13:30.497 15:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.497 15:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.756 15:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:30.756 15:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.756 15:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.756 [2024-12-06 15:40:13.977957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:30.756 [2024-12-06 15:40:13.978633] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:30.756 [2024-12-06 15:40:13.978668] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:30.756 [2024-12-06 15:40:13.979012] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:30.756 BaseBdev4 00:13:30.756 [2024-12-06 15:40:13.979182] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:30.756 [2024-12-06 15:40:13.979196] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:13:30.756 [2024-12-06 15:40:13.979361] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:30.756 15:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.756 15:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:30.756 15:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:30.756 15:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:30.756 15:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:30.756 15:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:30.757 15:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:30.757 15:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:30.757 15:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.757 15:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.757 15:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.757 15:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:30.757 15:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.757 15:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.757 [ 00:13:30.757 { 00:13:30.757 "name": "BaseBdev4", 00:13:30.757 "aliases": [ 00:13:30.757 "849bb060-7121-44a9-b75b-1bc21d35b4eb" 00:13:30.757 ], 00:13:30.757 "product_name": "Malloc disk", 00:13:30.757 "block_size": 512, 00:13:30.757 
"num_blocks": 65536, 00:13:30.757 "uuid": "849bb060-7121-44a9-b75b-1bc21d35b4eb", 00:13:30.757 "assigned_rate_limits": { 00:13:30.757 "rw_ios_per_sec": 0, 00:13:30.757 "rw_mbytes_per_sec": 0, 00:13:30.757 "r_mbytes_per_sec": 0, 00:13:30.757 "w_mbytes_per_sec": 0 00:13:30.757 }, 00:13:30.757 "claimed": true, 00:13:30.757 "claim_type": "exclusive_write", 00:13:30.757 "zoned": false, 00:13:30.757 "supported_io_types": { 00:13:30.757 "read": true, 00:13:30.757 "write": true, 00:13:30.757 "unmap": true, 00:13:30.757 "flush": true, 00:13:30.757 "reset": true, 00:13:30.757 "nvme_admin": false, 00:13:30.757 "nvme_io": false, 00:13:30.757 "nvme_io_md": false, 00:13:30.757 "write_zeroes": true, 00:13:30.757 "zcopy": true, 00:13:30.757 "get_zone_info": false, 00:13:30.757 "zone_management": false, 00:13:30.757 "zone_append": false, 00:13:30.757 "compare": false, 00:13:30.757 "compare_and_write": false, 00:13:30.757 "abort": true, 00:13:30.757 "seek_hole": false, 00:13:30.757 "seek_data": false, 00:13:30.757 "copy": true, 00:13:30.757 "nvme_iov_md": false 00:13:30.757 }, 00:13:30.757 "memory_domains": [ 00:13:30.757 { 00:13:30.757 "dma_device_id": "system", 00:13:30.757 "dma_device_type": 1 00:13:30.757 }, 00:13:30.757 { 00:13:30.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.757 "dma_device_type": 2 00:13:30.757 } 00:13:30.757 ], 00:13:30.757 "driver_specific": {} 00:13:30.757 } 00:13:30.757 ] 00:13:30.757 15:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.757 15:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:30.757 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:30.757 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:30.757 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:13:30.757 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:30.757 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.757 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:30.757 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:30.757 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:30.757 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.757 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.757 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.757 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.757 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.757 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.757 15:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.757 15:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.757 15:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.017 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.017 "name": "Existed_Raid", 00:13:31.017 "uuid": "76bf4ae3-4466-47e9-af20-e90d042bfea8", 00:13:31.017 "strip_size_kb": 64, 00:13:31.017 "state": "online", 00:13:31.017 "raid_level": "raid0", 00:13:31.017 "superblock": true, 00:13:31.017 "num_base_bdevs": 4, 
00:13:31.017 "num_base_bdevs_discovered": 4, 00:13:31.017 "num_base_bdevs_operational": 4, 00:13:31.017 "base_bdevs_list": [ 00:13:31.017 { 00:13:31.017 "name": "BaseBdev1", 00:13:31.017 "uuid": "ad9fe694-c6e3-49f0-b7bf-6672bc5677d6", 00:13:31.017 "is_configured": true, 00:13:31.017 "data_offset": 2048, 00:13:31.017 "data_size": 63488 00:13:31.017 }, 00:13:31.017 { 00:13:31.017 "name": "BaseBdev2", 00:13:31.017 "uuid": "75705e65-7c32-448a-bf71-19373c42cdd9", 00:13:31.017 "is_configured": true, 00:13:31.017 "data_offset": 2048, 00:13:31.017 "data_size": 63488 00:13:31.017 }, 00:13:31.017 { 00:13:31.017 "name": "BaseBdev3", 00:13:31.017 "uuid": "52525ec4-6f19-4ca8-b3f8-b94af482a239", 00:13:31.017 "is_configured": true, 00:13:31.017 "data_offset": 2048, 00:13:31.017 "data_size": 63488 00:13:31.017 }, 00:13:31.017 { 00:13:31.017 "name": "BaseBdev4", 00:13:31.017 "uuid": "849bb060-7121-44a9-b75b-1bc21d35b4eb", 00:13:31.017 "is_configured": true, 00:13:31.017 "data_offset": 2048, 00:13:31.017 "data_size": 63488 00:13:31.017 } 00:13:31.017 ] 00:13:31.017 }' 00:13:31.017 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.017 15:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.276 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:31.276 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:31.276 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:31.276 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:31.276 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:31.276 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:31.276 
15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:31.276 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:31.276 15:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.276 15:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.276 [2024-12-06 15:40:14.426050] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:31.276 15:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.276 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:31.277 "name": "Existed_Raid", 00:13:31.277 "aliases": [ 00:13:31.277 "76bf4ae3-4466-47e9-af20-e90d042bfea8" 00:13:31.277 ], 00:13:31.277 "product_name": "Raid Volume", 00:13:31.277 "block_size": 512, 00:13:31.277 "num_blocks": 253952, 00:13:31.277 "uuid": "76bf4ae3-4466-47e9-af20-e90d042bfea8", 00:13:31.277 "assigned_rate_limits": { 00:13:31.277 "rw_ios_per_sec": 0, 00:13:31.277 "rw_mbytes_per_sec": 0, 00:13:31.277 "r_mbytes_per_sec": 0, 00:13:31.277 "w_mbytes_per_sec": 0 00:13:31.277 }, 00:13:31.277 "claimed": false, 00:13:31.277 "zoned": false, 00:13:31.277 "supported_io_types": { 00:13:31.277 "read": true, 00:13:31.277 "write": true, 00:13:31.277 "unmap": true, 00:13:31.277 "flush": true, 00:13:31.277 "reset": true, 00:13:31.277 "nvme_admin": false, 00:13:31.277 "nvme_io": false, 00:13:31.277 "nvme_io_md": false, 00:13:31.277 "write_zeroes": true, 00:13:31.277 "zcopy": false, 00:13:31.277 "get_zone_info": false, 00:13:31.277 "zone_management": false, 00:13:31.277 "zone_append": false, 00:13:31.277 "compare": false, 00:13:31.277 "compare_and_write": false, 00:13:31.277 "abort": false, 00:13:31.277 "seek_hole": false, 00:13:31.277 "seek_data": false, 00:13:31.277 "copy": false, 00:13:31.277 
"nvme_iov_md": false 00:13:31.277 }, 00:13:31.277 "memory_domains": [ 00:13:31.277 { 00:13:31.277 "dma_device_id": "system", 00:13:31.277 "dma_device_type": 1 00:13:31.277 }, 00:13:31.277 { 00:13:31.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.277 "dma_device_type": 2 00:13:31.277 }, 00:13:31.277 { 00:13:31.277 "dma_device_id": "system", 00:13:31.277 "dma_device_type": 1 00:13:31.277 }, 00:13:31.277 { 00:13:31.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.277 "dma_device_type": 2 00:13:31.277 }, 00:13:31.277 { 00:13:31.277 "dma_device_id": "system", 00:13:31.277 "dma_device_type": 1 00:13:31.277 }, 00:13:31.277 { 00:13:31.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.277 "dma_device_type": 2 00:13:31.277 }, 00:13:31.277 { 00:13:31.277 "dma_device_id": "system", 00:13:31.277 "dma_device_type": 1 00:13:31.277 }, 00:13:31.277 { 00:13:31.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.277 "dma_device_type": 2 00:13:31.277 } 00:13:31.277 ], 00:13:31.277 "driver_specific": { 00:13:31.277 "raid": { 00:13:31.277 "uuid": "76bf4ae3-4466-47e9-af20-e90d042bfea8", 00:13:31.277 "strip_size_kb": 64, 00:13:31.277 "state": "online", 00:13:31.277 "raid_level": "raid0", 00:13:31.277 "superblock": true, 00:13:31.277 "num_base_bdevs": 4, 00:13:31.277 "num_base_bdevs_discovered": 4, 00:13:31.277 "num_base_bdevs_operational": 4, 00:13:31.277 "base_bdevs_list": [ 00:13:31.277 { 00:13:31.277 "name": "BaseBdev1", 00:13:31.277 "uuid": "ad9fe694-c6e3-49f0-b7bf-6672bc5677d6", 00:13:31.277 "is_configured": true, 00:13:31.277 "data_offset": 2048, 00:13:31.277 "data_size": 63488 00:13:31.277 }, 00:13:31.277 { 00:13:31.277 "name": "BaseBdev2", 00:13:31.277 "uuid": "75705e65-7c32-448a-bf71-19373c42cdd9", 00:13:31.277 "is_configured": true, 00:13:31.277 "data_offset": 2048, 00:13:31.277 "data_size": 63488 00:13:31.277 }, 00:13:31.277 { 00:13:31.277 "name": "BaseBdev3", 00:13:31.277 "uuid": "52525ec4-6f19-4ca8-b3f8-b94af482a239", 00:13:31.277 "is_configured": true, 
00:13:31.277 "data_offset": 2048, 00:13:31.277 "data_size": 63488 00:13:31.277 }, 00:13:31.277 { 00:13:31.277 "name": "BaseBdev4", 00:13:31.277 "uuid": "849bb060-7121-44a9-b75b-1bc21d35b4eb", 00:13:31.277 "is_configured": true, 00:13:31.277 "data_offset": 2048, 00:13:31.277 "data_size": 63488 00:13:31.277 } 00:13:31.277 ] 00:13:31.277 } 00:13:31.277 } 00:13:31.277 }' 00:13:31.277 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:31.277 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:31.277 BaseBdev2 00:13:31.277 BaseBdev3 00:13:31.277 BaseBdev4' 00:13:31.277 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:31.277 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:31.277 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:31.277 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:31.277 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:31.277 15:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.277 15:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.537 15:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.537 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:31.537 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:31.537 15:40:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:31.537 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:31.537 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:31.537 15:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.537 15:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.537 15:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.537 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:31.537 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:31.537 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:31.537 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:31.537 15:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.537 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:31.537 15:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.537 15:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.537 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:31.537 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:31.537 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:13:31.537 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:31.537 15:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.537 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:31.537 15:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.537 15:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.537 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:31.537 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:31.537 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:31.537 15:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.537 15:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.537 [2024-12-06 15:40:14.717720] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:31.537 [2024-12-06 15:40:14.717760] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:31.537 [2024-12-06 15:40:14.717825] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:31.537 15:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.537 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:31.537 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:13:31.537 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:13:31.537 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:13:31.537 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:31.537 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:13:31.537 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:31.537 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:31.537 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:31.797 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:31.797 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:31.797 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.797 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.797 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.797 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.797 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.797 15:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.797 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.797 15:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.797 15:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:31.797 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.797 "name": "Existed_Raid", 00:13:31.797 "uuid": "76bf4ae3-4466-47e9-af20-e90d042bfea8", 00:13:31.797 "strip_size_kb": 64, 00:13:31.797 "state": "offline", 00:13:31.797 "raid_level": "raid0", 00:13:31.797 "superblock": true, 00:13:31.797 "num_base_bdevs": 4, 00:13:31.797 "num_base_bdevs_discovered": 3, 00:13:31.797 "num_base_bdevs_operational": 3, 00:13:31.797 "base_bdevs_list": [ 00:13:31.797 { 00:13:31.797 "name": null, 00:13:31.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.797 "is_configured": false, 00:13:31.797 "data_offset": 0, 00:13:31.797 "data_size": 63488 00:13:31.797 }, 00:13:31.797 { 00:13:31.797 "name": "BaseBdev2", 00:13:31.797 "uuid": "75705e65-7c32-448a-bf71-19373c42cdd9", 00:13:31.797 "is_configured": true, 00:13:31.797 "data_offset": 2048, 00:13:31.797 "data_size": 63488 00:13:31.797 }, 00:13:31.797 { 00:13:31.797 "name": "BaseBdev3", 00:13:31.797 "uuid": "52525ec4-6f19-4ca8-b3f8-b94af482a239", 00:13:31.797 "is_configured": true, 00:13:31.797 "data_offset": 2048, 00:13:31.797 "data_size": 63488 00:13:31.798 }, 00:13:31.798 { 00:13:31.798 "name": "BaseBdev4", 00:13:31.798 "uuid": "849bb060-7121-44a9-b75b-1bc21d35b4eb", 00:13:31.798 "is_configured": true, 00:13:31.798 "data_offset": 2048, 00:13:31.798 "data_size": 63488 00:13:31.798 } 00:13:31.798 ] 00:13:31.798 }' 00:13:31.798 15:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.798 15:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.075 15:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:32.075 15:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:32.075 15:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:32.075 15:40:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.075 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.075 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.075 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.075 15:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:32.075 15:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:32.075 15:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:32.075 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.075 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.075 [2024-12-06 15:40:15.259522] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:32.333 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.333 15:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:32.333 15:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:32.333 15:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.333 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.333 15:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:32.333 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.333 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:32.333 15:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:32.333 15:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:32.333 15:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:32.333 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.333 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.333 [2024-12-06 15:40:15.422664] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:32.333 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.333 15:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:32.333 15:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:32.333 15:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.333 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.333 15:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:32.333 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.333 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.333 15:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:32.333 15:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:32.333 15:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:32.333 15:40:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.333 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.333 [2024-12-06 15:40:15.584198] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:32.333 [2024-12-06 15:40:15.584266] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:32.650 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.650 15:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:32.650 15:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:32.650 15:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:32.650 15:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.650 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.650 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.650 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.650 15:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:32.650 15:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:32.650 15:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:32.650 15:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:32.650 15:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:32.650 15:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:13:32.650 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.650 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.650 BaseBdev2 00:13:32.650 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.651 15:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:32.651 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:32.651 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:32.651 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:32.651 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:32.651 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:32.651 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:32.651 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.651 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.651 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.651 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:32.651 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.651 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.651 [ 00:13:32.651 { 00:13:32.651 "name": "BaseBdev2", 00:13:32.651 "aliases": [ 00:13:32.651 
"0355ccfc-15d0-4951-b0fa-c2c0aeee63e3" 00:13:32.651 ], 00:13:32.651 "product_name": "Malloc disk", 00:13:32.651 "block_size": 512, 00:13:32.651 "num_blocks": 65536, 00:13:32.651 "uuid": "0355ccfc-15d0-4951-b0fa-c2c0aeee63e3", 00:13:32.651 "assigned_rate_limits": { 00:13:32.651 "rw_ios_per_sec": 0, 00:13:32.651 "rw_mbytes_per_sec": 0, 00:13:32.651 "r_mbytes_per_sec": 0, 00:13:32.651 "w_mbytes_per_sec": 0 00:13:32.651 }, 00:13:32.651 "claimed": false, 00:13:32.651 "zoned": false, 00:13:32.651 "supported_io_types": { 00:13:32.651 "read": true, 00:13:32.651 "write": true, 00:13:32.651 "unmap": true, 00:13:32.651 "flush": true, 00:13:32.651 "reset": true, 00:13:32.651 "nvme_admin": false, 00:13:32.651 "nvme_io": false, 00:13:32.651 "nvme_io_md": false, 00:13:32.651 "write_zeroes": true, 00:13:32.651 "zcopy": true, 00:13:32.651 "get_zone_info": false, 00:13:32.651 "zone_management": false, 00:13:32.651 "zone_append": false, 00:13:32.651 "compare": false, 00:13:32.651 "compare_and_write": false, 00:13:32.651 "abort": true, 00:13:32.651 "seek_hole": false, 00:13:32.651 "seek_data": false, 00:13:32.651 "copy": true, 00:13:32.651 "nvme_iov_md": false 00:13:32.651 }, 00:13:32.651 "memory_domains": [ 00:13:32.651 { 00:13:32.651 "dma_device_id": "system", 00:13:32.651 "dma_device_type": 1 00:13:32.651 }, 00:13:32.651 { 00:13:32.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.651 "dma_device_type": 2 00:13:32.651 } 00:13:32.651 ], 00:13:32.651 "driver_specific": {} 00:13:32.651 } 00:13:32.651 ] 00:13:32.651 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.651 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:32.651 15:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:32.651 15:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:32.651 15:40:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:32.651 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.651 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.651 BaseBdev3 00:13:32.651 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.651 15:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:32.651 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:32.651 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:32.651 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:32.651 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:32.651 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:32.651 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:32.651 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.651 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.651 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.651 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:32.651 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.651 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.651 [ 00:13:32.651 { 
00:13:32.651 "name": "BaseBdev3", 00:13:32.651 "aliases": [ 00:13:32.651 "edb551b4-75b5-4654-8a32-58adf9c592ea" 00:13:32.651 ], 00:13:32.651 "product_name": "Malloc disk", 00:13:32.651 "block_size": 512, 00:13:32.651 "num_blocks": 65536, 00:13:32.911 "uuid": "edb551b4-75b5-4654-8a32-58adf9c592ea", 00:13:32.911 "assigned_rate_limits": { 00:13:32.911 "rw_ios_per_sec": 0, 00:13:32.911 "rw_mbytes_per_sec": 0, 00:13:32.911 "r_mbytes_per_sec": 0, 00:13:32.912 "w_mbytes_per_sec": 0 00:13:32.912 }, 00:13:32.912 "claimed": false, 00:13:32.912 "zoned": false, 00:13:32.912 "supported_io_types": { 00:13:32.912 "read": true, 00:13:32.912 "write": true, 00:13:32.912 "unmap": true, 00:13:32.912 "flush": true, 00:13:32.912 "reset": true, 00:13:32.912 "nvme_admin": false, 00:13:32.912 "nvme_io": false, 00:13:32.912 "nvme_io_md": false, 00:13:32.912 "write_zeroes": true, 00:13:32.912 "zcopy": true, 00:13:32.912 "get_zone_info": false, 00:13:32.912 "zone_management": false, 00:13:32.912 "zone_append": false, 00:13:32.912 "compare": false, 00:13:32.912 "compare_and_write": false, 00:13:32.912 "abort": true, 00:13:32.912 "seek_hole": false, 00:13:32.912 "seek_data": false, 00:13:32.912 "copy": true, 00:13:32.912 "nvme_iov_md": false 00:13:32.912 }, 00:13:32.912 "memory_domains": [ 00:13:32.912 { 00:13:32.912 "dma_device_id": "system", 00:13:32.912 "dma_device_type": 1 00:13:32.912 }, 00:13:32.912 { 00:13:32.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.912 "dma_device_type": 2 00:13:32.912 } 00:13:32.912 ], 00:13:32.912 "driver_specific": {} 00:13:32.912 } 00:13:32.912 ] 00:13:32.912 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.912 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:32.912 15:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:32.912 15:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:13:32.912 15:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:32.912 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.912 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.912 BaseBdev4 00:13:32.912 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.912 15:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:32.912 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:32.912 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:32.912 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:32.912 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:32.912 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:32.912 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:32.912 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.912 15:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.912 15:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.912 15:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:32.912 15:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.912 15:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:13:32.912 [ 00:13:32.912 { 00:13:32.912 "name": "BaseBdev4", 00:13:32.912 "aliases": [ 00:13:32.912 "7b139712-52de-4d8e-8dad-7e3f3089df6f" 00:13:32.912 ], 00:13:32.912 "product_name": "Malloc disk", 00:13:32.912 "block_size": 512, 00:13:32.912 "num_blocks": 65536, 00:13:32.912 "uuid": "7b139712-52de-4d8e-8dad-7e3f3089df6f", 00:13:32.912 "assigned_rate_limits": { 00:13:32.912 "rw_ios_per_sec": 0, 00:13:32.912 "rw_mbytes_per_sec": 0, 00:13:32.912 "r_mbytes_per_sec": 0, 00:13:32.912 "w_mbytes_per_sec": 0 00:13:32.912 }, 00:13:32.912 "claimed": false, 00:13:32.912 "zoned": false, 00:13:32.912 "supported_io_types": { 00:13:32.912 "read": true, 00:13:32.912 "write": true, 00:13:32.912 "unmap": true, 00:13:32.912 "flush": true, 00:13:32.912 "reset": true, 00:13:32.912 "nvme_admin": false, 00:13:32.912 "nvme_io": false, 00:13:32.912 "nvme_io_md": false, 00:13:32.912 "write_zeroes": true, 00:13:32.912 "zcopy": true, 00:13:32.912 "get_zone_info": false, 00:13:32.912 "zone_management": false, 00:13:32.912 "zone_append": false, 00:13:32.912 "compare": false, 00:13:32.912 "compare_and_write": false, 00:13:32.912 "abort": true, 00:13:32.912 "seek_hole": false, 00:13:32.912 "seek_data": false, 00:13:32.912 "copy": true, 00:13:32.912 "nvme_iov_md": false 00:13:32.912 }, 00:13:32.912 "memory_domains": [ 00:13:32.912 { 00:13:32.912 "dma_device_id": "system", 00:13:32.912 "dma_device_type": 1 00:13:32.912 }, 00:13:32.912 { 00:13:32.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.912 "dma_device_type": 2 00:13:32.912 } 00:13:32.912 ], 00:13:32.912 "driver_specific": {} 00:13:32.912 } 00:13:32.912 ] 00:13:32.912 15:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.912 15:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:32.912 15:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:32.912 15:40:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:32.912 15:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:32.912 15:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.912 15:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.912 [2024-12-06 15:40:16.042780] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:32.912 [2024-12-06 15:40:16.042837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:32.912 [2024-12-06 15:40:16.042863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:32.912 [2024-12-06 15:40:16.045256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:32.912 [2024-12-06 15:40:16.045311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:32.912 15:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.912 15:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:32.912 15:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:32.912 15:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:32.912 15:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:32.912 15:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:32.912 15:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:32.912 15:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.912 15:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.912 15:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.912 15:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.912 15:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.912 15:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:32.912 15:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.912 15:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.912 15:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.912 15:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.912 "name": "Existed_Raid", 00:13:32.912 "uuid": "a203e0b5-eb54-4bce-935f-4db61ea450ad", 00:13:32.912 "strip_size_kb": 64, 00:13:32.912 "state": "configuring", 00:13:32.912 "raid_level": "raid0", 00:13:32.912 "superblock": true, 00:13:32.912 "num_base_bdevs": 4, 00:13:32.912 "num_base_bdevs_discovered": 3, 00:13:32.912 "num_base_bdevs_operational": 4, 00:13:32.912 "base_bdevs_list": [ 00:13:32.912 { 00:13:32.912 "name": "BaseBdev1", 00:13:32.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.912 "is_configured": false, 00:13:32.912 "data_offset": 0, 00:13:32.912 "data_size": 0 00:13:32.912 }, 00:13:32.912 { 00:13:32.912 "name": "BaseBdev2", 00:13:32.912 "uuid": "0355ccfc-15d0-4951-b0fa-c2c0aeee63e3", 00:13:32.912 "is_configured": true, 00:13:32.912 "data_offset": 2048, 00:13:32.912 "data_size": 63488 
00:13:32.912 }, 00:13:32.912 { 00:13:32.912 "name": "BaseBdev3", 00:13:32.912 "uuid": "edb551b4-75b5-4654-8a32-58adf9c592ea", 00:13:32.912 "is_configured": true, 00:13:32.912 "data_offset": 2048, 00:13:32.912 "data_size": 63488 00:13:32.912 }, 00:13:32.912 { 00:13:32.912 "name": "BaseBdev4", 00:13:32.912 "uuid": "7b139712-52de-4d8e-8dad-7e3f3089df6f", 00:13:32.912 "is_configured": true, 00:13:32.912 "data_offset": 2048, 00:13:32.912 "data_size": 63488 00:13:32.912 } 00:13:32.912 ] 00:13:32.912 }' 00:13:32.912 15:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.912 15:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.170 15:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:33.170 15:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.170 15:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.170 [2024-12-06 15:40:16.446389] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:33.170 15:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.171 15:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:33.171 15:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:33.171 15:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:33.171 15:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:33.171 15:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:33.171 15:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:33.171 15:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.171 15:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.171 15:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.171 15:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.171 15:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.171 15:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.171 15:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.171 15:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.429 15:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.429 15:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.429 "name": "Existed_Raid", 00:13:33.429 "uuid": "a203e0b5-eb54-4bce-935f-4db61ea450ad", 00:13:33.429 "strip_size_kb": 64, 00:13:33.429 "state": "configuring", 00:13:33.429 "raid_level": "raid0", 00:13:33.429 "superblock": true, 00:13:33.429 "num_base_bdevs": 4, 00:13:33.429 "num_base_bdevs_discovered": 2, 00:13:33.429 "num_base_bdevs_operational": 4, 00:13:33.429 "base_bdevs_list": [ 00:13:33.429 { 00:13:33.429 "name": "BaseBdev1", 00:13:33.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.429 "is_configured": false, 00:13:33.429 "data_offset": 0, 00:13:33.429 "data_size": 0 00:13:33.429 }, 00:13:33.429 { 00:13:33.429 "name": null, 00:13:33.429 "uuid": "0355ccfc-15d0-4951-b0fa-c2c0aeee63e3", 00:13:33.429 "is_configured": false, 00:13:33.429 "data_offset": 0, 00:13:33.429 "data_size": 63488 
00:13:33.429 }, 00:13:33.429 { 00:13:33.429 "name": "BaseBdev3", 00:13:33.429 "uuid": "edb551b4-75b5-4654-8a32-58adf9c592ea", 00:13:33.429 "is_configured": true, 00:13:33.429 "data_offset": 2048, 00:13:33.429 "data_size": 63488 00:13:33.429 }, 00:13:33.429 { 00:13:33.429 "name": "BaseBdev4", 00:13:33.429 "uuid": "7b139712-52de-4d8e-8dad-7e3f3089df6f", 00:13:33.429 "is_configured": true, 00:13:33.429 "data_offset": 2048, 00:13:33.429 "data_size": 63488 00:13:33.429 } 00:13:33.429 ] 00:13:33.429 }' 00:13:33.429 15:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.429 15:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.688 15:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.688 15:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:33.688 15:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.688 15:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.688 15:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.688 15:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:33.688 15:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:33.688 15:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.688 15:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.688 [2024-12-06 15:40:16.974390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:33.688 BaseBdev1 00:13:33.688 15:40:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.688 15:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:33.688 15:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:33.688 15:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:33.688 15:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:33.688 15:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:33.688 15:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:33.688 15:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:33.688 15:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.688 15:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.946 15:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.947 15:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:33.947 15:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.947 15:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.947 [ 00:13:33.947 { 00:13:33.947 "name": "BaseBdev1", 00:13:33.947 "aliases": [ 00:13:33.947 "83e56de2-ce29-40c5-b8a2-bdc5d59c6656" 00:13:33.947 ], 00:13:33.947 "product_name": "Malloc disk", 00:13:33.947 "block_size": 512, 00:13:33.947 "num_blocks": 65536, 00:13:33.947 "uuid": "83e56de2-ce29-40c5-b8a2-bdc5d59c6656", 00:13:33.947 "assigned_rate_limits": { 00:13:33.947 "rw_ios_per_sec": 0, 00:13:33.947 "rw_mbytes_per_sec": 0, 
00:13:33.947 "r_mbytes_per_sec": 0, 00:13:33.947 "w_mbytes_per_sec": 0 00:13:33.947 }, 00:13:33.947 "claimed": true, 00:13:33.947 "claim_type": "exclusive_write", 00:13:33.947 "zoned": false, 00:13:33.947 "supported_io_types": { 00:13:33.947 "read": true, 00:13:33.947 "write": true, 00:13:33.947 "unmap": true, 00:13:33.947 "flush": true, 00:13:33.947 "reset": true, 00:13:33.947 "nvme_admin": false, 00:13:33.947 "nvme_io": false, 00:13:33.947 "nvme_io_md": false, 00:13:33.947 "write_zeroes": true, 00:13:33.947 "zcopy": true, 00:13:33.947 "get_zone_info": false, 00:13:33.947 "zone_management": false, 00:13:33.947 "zone_append": false, 00:13:33.947 "compare": false, 00:13:33.947 "compare_and_write": false, 00:13:33.947 "abort": true, 00:13:33.947 "seek_hole": false, 00:13:33.947 "seek_data": false, 00:13:33.947 "copy": true, 00:13:33.947 "nvme_iov_md": false 00:13:33.947 }, 00:13:33.947 "memory_domains": [ 00:13:33.947 { 00:13:33.947 "dma_device_id": "system", 00:13:33.947 "dma_device_type": 1 00:13:33.947 }, 00:13:33.947 { 00:13:33.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.947 "dma_device_type": 2 00:13:33.947 } 00:13:33.947 ], 00:13:33.947 "driver_specific": {} 00:13:33.947 } 00:13:33.947 ] 00:13:33.947 15:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.947 15:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:33.947 15:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:33.947 15:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:33.947 15:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:33.947 15:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:33.947 15:40:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:33.947 15:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:33.947 15:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.947 15:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.947 15:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.947 15:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.947 15:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.947 15:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.947 15:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.947 15:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.947 15:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.947 15:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.947 "name": "Existed_Raid", 00:13:33.947 "uuid": "a203e0b5-eb54-4bce-935f-4db61ea450ad", 00:13:33.947 "strip_size_kb": 64, 00:13:33.947 "state": "configuring", 00:13:33.947 "raid_level": "raid0", 00:13:33.947 "superblock": true, 00:13:33.947 "num_base_bdevs": 4, 00:13:33.947 "num_base_bdevs_discovered": 3, 00:13:33.947 "num_base_bdevs_operational": 4, 00:13:33.947 "base_bdevs_list": [ 00:13:33.947 { 00:13:33.947 "name": "BaseBdev1", 00:13:33.947 "uuid": "83e56de2-ce29-40c5-b8a2-bdc5d59c6656", 00:13:33.947 "is_configured": true, 00:13:33.947 "data_offset": 2048, 00:13:33.947 "data_size": 63488 00:13:33.947 }, 00:13:33.947 { 
00:13:33.947 "name": null, 00:13:33.947 "uuid": "0355ccfc-15d0-4951-b0fa-c2c0aeee63e3", 00:13:33.947 "is_configured": false, 00:13:33.947 "data_offset": 0, 00:13:33.947 "data_size": 63488 00:13:33.947 }, 00:13:33.947 { 00:13:33.947 "name": "BaseBdev3", 00:13:33.947 "uuid": "edb551b4-75b5-4654-8a32-58adf9c592ea", 00:13:33.947 "is_configured": true, 00:13:33.947 "data_offset": 2048, 00:13:33.947 "data_size": 63488 00:13:33.947 }, 00:13:33.947 { 00:13:33.947 "name": "BaseBdev4", 00:13:33.947 "uuid": "7b139712-52de-4d8e-8dad-7e3f3089df6f", 00:13:33.947 "is_configured": true, 00:13:33.947 "data_offset": 2048, 00:13:33.947 "data_size": 63488 00:13:33.947 } 00:13:33.947 ] 00:13:33.947 }' 00:13:33.947 15:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.947 15:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.205 15:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:34.205 15:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.205 15:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.205 15:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.205 15:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.205 15:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:34.205 15:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:34.205 15:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.205 15:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.205 [2024-12-06 15:40:17.453905] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:34.205 15:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.205 15:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:34.205 15:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:34.205 15:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:34.205 15:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:34.205 15:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:34.205 15:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:34.206 15:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.206 15:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.206 15:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.206 15:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.206 15:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.206 15:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.206 15:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.206 15:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.206 15:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.206 15:40:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.206 "name": "Existed_Raid", 00:13:34.206 "uuid": "a203e0b5-eb54-4bce-935f-4db61ea450ad", 00:13:34.206 "strip_size_kb": 64, 00:13:34.206 "state": "configuring", 00:13:34.206 "raid_level": "raid0", 00:13:34.206 "superblock": true, 00:13:34.206 "num_base_bdevs": 4, 00:13:34.206 "num_base_bdevs_discovered": 2, 00:13:34.206 "num_base_bdevs_operational": 4, 00:13:34.206 "base_bdevs_list": [ 00:13:34.206 { 00:13:34.206 "name": "BaseBdev1", 00:13:34.206 "uuid": "83e56de2-ce29-40c5-b8a2-bdc5d59c6656", 00:13:34.206 "is_configured": true, 00:13:34.206 "data_offset": 2048, 00:13:34.206 "data_size": 63488 00:13:34.206 }, 00:13:34.206 { 00:13:34.206 "name": null, 00:13:34.206 "uuid": "0355ccfc-15d0-4951-b0fa-c2c0aeee63e3", 00:13:34.206 "is_configured": false, 00:13:34.206 "data_offset": 0, 00:13:34.206 "data_size": 63488 00:13:34.206 }, 00:13:34.206 { 00:13:34.206 "name": null, 00:13:34.206 "uuid": "edb551b4-75b5-4654-8a32-58adf9c592ea", 00:13:34.206 "is_configured": false, 00:13:34.206 "data_offset": 0, 00:13:34.206 "data_size": 63488 00:13:34.206 }, 00:13:34.206 { 00:13:34.206 "name": "BaseBdev4", 00:13:34.206 "uuid": "7b139712-52de-4d8e-8dad-7e3f3089df6f", 00:13:34.206 "is_configured": true, 00:13:34.206 "data_offset": 2048, 00:13:34.206 "data_size": 63488 00:13:34.206 } 00:13:34.206 ] 00:13:34.206 }' 00:13:34.206 15:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.206 15:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.773 15:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.773 15:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.773 15:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:34.773 
15:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.773 15:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.773 15:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:34.773 15:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:34.773 15:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.773 15:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.773 [2024-12-06 15:40:18.013126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:34.773 15:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.773 15:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:34.773 15:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:34.773 15:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:34.773 15:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:34.773 15:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:34.773 15:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:34.773 15:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.773 15:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.773 15:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:34.773 15:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.773 15:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.773 15:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.773 15:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.773 15:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.773 15:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.773 15:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.773 "name": "Existed_Raid", 00:13:34.773 "uuid": "a203e0b5-eb54-4bce-935f-4db61ea450ad", 00:13:34.773 "strip_size_kb": 64, 00:13:34.773 "state": "configuring", 00:13:34.773 "raid_level": "raid0", 00:13:34.773 "superblock": true, 00:13:34.773 "num_base_bdevs": 4, 00:13:34.773 "num_base_bdevs_discovered": 3, 00:13:34.773 "num_base_bdevs_operational": 4, 00:13:34.773 "base_bdevs_list": [ 00:13:34.773 { 00:13:34.773 "name": "BaseBdev1", 00:13:34.773 "uuid": "83e56de2-ce29-40c5-b8a2-bdc5d59c6656", 00:13:34.773 "is_configured": true, 00:13:34.773 "data_offset": 2048, 00:13:34.773 "data_size": 63488 00:13:34.773 }, 00:13:34.773 { 00:13:34.773 "name": null, 00:13:34.773 "uuid": "0355ccfc-15d0-4951-b0fa-c2c0aeee63e3", 00:13:34.773 "is_configured": false, 00:13:34.773 "data_offset": 0, 00:13:34.773 "data_size": 63488 00:13:34.773 }, 00:13:34.773 { 00:13:34.773 "name": "BaseBdev3", 00:13:34.773 "uuid": "edb551b4-75b5-4654-8a32-58adf9c592ea", 00:13:34.773 "is_configured": true, 00:13:34.773 "data_offset": 2048, 00:13:34.773 "data_size": 63488 00:13:34.773 }, 00:13:34.773 { 00:13:34.773 "name": "BaseBdev4", 00:13:34.773 "uuid": 
"7b139712-52de-4d8e-8dad-7e3f3089df6f", 00:13:34.773 "is_configured": true, 00:13:34.773 "data_offset": 2048, 00:13:34.773 "data_size": 63488 00:13:34.773 } 00:13:34.773 ] 00:13:34.773 }' 00:13:34.773 15:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.773 15:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.339 15:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.339 15:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:35.339 15:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.339 15:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.339 15:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.339 15:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:35.339 15:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:35.339 15:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.339 15:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.339 [2024-12-06 15:40:18.492676] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:35.339 15:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.339 15:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:35.339 15:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:35.339 15:40:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:35.339 15:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:35.340 15:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:35.340 15:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:35.340 15:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.340 15:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.340 15:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.340 15:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.340 15:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.340 15:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.340 15:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.340 15:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.598 15:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.598 15:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.598 "name": "Existed_Raid", 00:13:35.598 "uuid": "a203e0b5-eb54-4bce-935f-4db61ea450ad", 00:13:35.598 "strip_size_kb": 64, 00:13:35.598 "state": "configuring", 00:13:35.598 "raid_level": "raid0", 00:13:35.598 "superblock": true, 00:13:35.598 "num_base_bdevs": 4, 00:13:35.598 "num_base_bdevs_discovered": 2, 00:13:35.598 "num_base_bdevs_operational": 4, 00:13:35.598 "base_bdevs_list": [ 00:13:35.598 { 00:13:35.598 "name": null, 00:13:35.598 
"uuid": "83e56de2-ce29-40c5-b8a2-bdc5d59c6656", 00:13:35.598 "is_configured": false, 00:13:35.598 "data_offset": 0, 00:13:35.598 "data_size": 63488 00:13:35.598 }, 00:13:35.598 { 00:13:35.598 "name": null, 00:13:35.598 "uuid": "0355ccfc-15d0-4951-b0fa-c2c0aeee63e3", 00:13:35.598 "is_configured": false, 00:13:35.598 "data_offset": 0, 00:13:35.598 "data_size": 63488 00:13:35.598 }, 00:13:35.598 { 00:13:35.598 "name": "BaseBdev3", 00:13:35.598 "uuid": "edb551b4-75b5-4654-8a32-58adf9c592ea", 00:13:35.598 "is_configured": true, 00:13:35.598 "data_offset": 2048, 00:13:35.598 "data_size": 63488 00:13:35.598 }, 00:13:35.598 { 00:13:35.598 "name": "BaseBdev4", 00:13:35.598 "uuid": "7b139712-52de-4d8e-8dad-7e3f3089df6f", 00:13:35.598 "is_configured": true, 00:13:35.598 "data_offset": 2048, 00:13:35.598 "data_size": 63488 00:13:35.598 } 00:13:35.598 ] 00:13:35.598 }' 00:13:35.598 15:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.598 15:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.856 15:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.856 15:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.856 15:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:35.856 15:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.856 15:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.856 15:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:35.856 15:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:35.856 15:40:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.856 15:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.856 [2024-12-06 15:40:19.055887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:35.856 15:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.856 15:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:35.856 15:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:35.856 15:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:35.856 15:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:35.856 15:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:35.856 15:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:35.856 15:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.856 15:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.856 15:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.856 15:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.856 15:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.856 15:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.856 15:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.856 15:40:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.856 15:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.856 15:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.856 "name": "Existed_Raid", 00:13:35.856 "uuid": "a203e0b5-eb54-4bce-935f-4db61ea450ad", 00:13:35.856 "strip_size_kb": 64, 00:13:35.856 "state": "configuring", 00:13:35.856 "raid_level": "raid0", 00:13:35.856 "superblock": true, 00:13:35.856 "num_base_bdevs": 4, 00:13:35.856 "num_base_bdevs_discovered": 3, 00:13:35.856 "num_base_bdevs_operational": 4, 00:13:35.856 "base_bdevs_list": [ 00:13:35.856 { 00:13:35.856 "name": null, 00:13:35.856 "uuid": "83e56de2-ce29-40c5-b8a2-bdc5d59c6656", 00:13:35.856 "is_configured": false, 00:13:35.856 "data_offset": 0, 00:13:35.856 "data_size": 63488 00:13:35.856 }, 00:13:35.856 { 00:13:35.856 "name": "BaseBdev2", 00:13:35.856 "uuid": "0355ccfc-15d0-4951-b0fa-c2c0aeee63e3", 00:13:35.856 "is_configured": true, 00:13:35.856 "data_offset": 2048, 00:13:35.856 "data_size": 63488 00:13:35.856 }, 00:13:35.856 { 00:13:35.856 "name": "BaseBdev3", 00:13:35.856 "uuid": "edb551b4-75b5-4654-8a32-58adf9c592ea", 00:13:35.856 "is_configured": true, 00:13:35.856 "data_offset": 2048, 00:13:35.856 "data_size": 63488 00:13:35.856 }, 00:13:35.856 { 00:13:35.856 "name": "BaseBdev4", 00:13:35.856 "uuid": "7b139712-52de-4d8e-8dad-7e3f3089df6f", 00:13:35.856 "is_configured": true, 00:13:35.856 "data_offset": 2048, 00:13:35.856 "data_size": 63488 00:13:35.856 } 00:13:35.856 ] 00:13:35.856 }' 00:13:35.856 15:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.856 15:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.422 15:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.422 15:40:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.422 15:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.422 15:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:36.422 15:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.422 15:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:36.422 15:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.422 15:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.422 15:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.422 15:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:36.422 15:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.422 15:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 83e56de2-ce29-40c5-b8a2-bdc5d59c6656 00:13:36.422 15:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.422 15:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.422 [2024-12-06 15:40:19.583774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:36.422 [2024-12-06 15:40:19.584058] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:36.422 [2024-12-06 15:40:19.584075] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:36.422 NewBaseBdev 00:13:36.422 [2024-12-06 15:40:19.584402] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:36.422 [2024-12-06 15:40:19.584590] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:36.422 [2024-12-06 15:40:19.584612] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:36.422 [2024-12-06 15:40:19.584763] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:36.422 15:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.422 15:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:36.422 15:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:36.422 15:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:36.422 15:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:36.422 15:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:36.422 15:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:36.422 15:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:36.422 15:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.422 15:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.422 15:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.422 15:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:36.422 15:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.422 
15:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.422 [ 00:13:36.422 { 00:13:36.422 "name": "NewBaseBdev", 00:13:36.422 "aliases": [ 00:13:36.422 "83e56de2-ce29-40c5-b8a2-bdc5d59c6656" 00:13:36.422 ], 00:13:36.422 "product_name": "Malloc disk", 00:13:36.422 "block_size": 512, 00:13:36.422 "num_blocks": 65536, 00:13:36.422 "uuid": "83e56de2-ce29-40c5-b8a2-bdc5d59c6656", 00:13:36.422 "assigned_rate_limits": { 00:13:36.422 "rw_ios_per_sec": 0, 00:13:36.422 "rw_mbytes_per_sec": 0, 00:13:36.422 "r_mbytes_per_sec": 0, 00:13:36.422 "w_mbytes_per_sec": 0 00:13:36.422 }, 00:13:36.422 "claimed": true, 00:13:36.422 "claim_type": "exclusive_write", 00:13:36.422 "zoned": false, 00:13:36.422 "supported_io_types": { 00:13:36.422 "read": true, 00:13:36.422 "write": true, 00:13:36.422 "unmap": true, 00:13:36.422 "flush": true, 00:13:36.422 "reset": true, 00:13:36.422 "nvme_admin": false, 00:13:36.422 "nvme_io": false, 00:13:36.422 "nvme_io_md": false, 00:13:36.422 "write_zeroes": true, 00:13:36.422 "zcopy": true, 00:13:36.422 "get_zone_info": false, 00:13:36.422 "zone_management": false, 00:13:36.422 "zone_append": false, 00:13:36.422 "compare": false, 00:13:36.422 "compare_and_write": false, 00:13:36.422 "abort": true, 00:13:36.422 "seek_hole": false, 00:13:36.422 "seek_data": false, 00:13:36.422 "copy": true, 00:13:36.422 "nvme_iov_md": false 00:13:36.422 }, 00:13:36.422 "memory_domains": [ 00:13:36.422 { 00:13:36.422 "dma_device_id": "system", 00:13:36.422 "dma_device_type": 1 00:13:36.422 }, 00:13:36.422 { 00:13:36.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.422 "dma_device_type": 2 00:13:36.422 } 00:13:36.422 ], 00:13:36.422 "driver_specific": {} 00:13:36.422 } 00:13:36.422 ] 00:13:36.423 15:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.423 15:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:36.423 15:40:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:36.423 15:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:36.423 15:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.423 15:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:36.423 15:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:36.423 15:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:36.423 15:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.423 15:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.423 15:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.423 15:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.423 15:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.423 15:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.423 15:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.423 15:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.423 15:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.423 15:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.423 "name": "Existed_Raid", 00:13:36.423 "uuid": "a203e0b5-eb54-4bce-935f-4db61ea450ad", 00:13:36.423 "strip_size_kb": 64, 00:13:36.423 
"state": "online", 00:13:36.423 "raid_level": "raid0", 00:13:36.423 "superblock": true, 00:13:36.423 "num_base_bdevs": 4, 00:13:36.423 "num_base_bdevs_discovered": 4, 00:13:36.423 "num_base_bdevs_operational": 4, 00:13:36.423 "base_bdevs_list": [ 00:13:36.423 { 00:13:36.423 "name": "NewBaseBdev", 00:13:36.423 "uuid": "83e56de2-ce29-40c5-b8a2-bdc5d59c6656", 00:13:36.423 "is_configured": true, 00:13:36.423 "data_offset": 2048, 00:13:36.423 "data_size": 63488 00:13:36.423 }, 00:13:36.423 { 00:13:36.423 "name": "BaseBdev2", 00:13:36.423 "uuid": "0355ccfc-15d0-4951-b0fa-c2c0aeee63e3", 00:13:36.423 "is_configured": true, 00:13:36.423 "data_offset": 2048, 00:13:36.423 "data_size": 63488 00:13:36.423 }, 00:13:36.423 { 00:13:36.423 "name": "BaseBdev3", 00:13:36.423 "uuid": "edb551b4-75b5-4654-8a32-58adf9c592ea", 00:13:36.423 "is_configured": true, 00:13:36.423 "data_offset": 2048, 00:13:36.423 "data_size": 63488 00:13:36.423 }, 00:13:36.423 { 00:13:36.423 "name": "BaseBdev4", 00:13:36.423 "uuid": "7b139712-52de-4d8e-8dad-7e3f3089df6f", 00:13:36.423 "is_configured": true, 00:13:36.423 "data_offset": 2048, 00:13:36.423 "data_size": 63488 00:13:36.423 } 00:13:36.423 ] 00:13:36.423 }' 00:13:36.423 15:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.423 15:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.990 15:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:36.990 15:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:36.990 15:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:36.990 15:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:36.990 15:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:36.990 
15:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:36.990 15:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:36.990 15:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.990 15:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:36.990 15:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.990 [2024-12-06 15:40:20.048053] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:36.990 15:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.990 15:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:36.990 "name": "Existed_Raid", 00:13:36.990 "aliases": [ 00:13:36.990 "a203e0b5-eb54-4bce-935f-4db61ea450ad" 00:13:36.990 ], 00:13:36.990 "product_name": "Raid Volume", 00:13:36.990 "block_size": 512, 00:13:36.990 "num_blocks": 253952, 00:13:36.990 "uuid": "a203e0b5-eb54-4bce-935f-4db61ea450ad", 00:13:36.990 "assigned_rate_limits": { 00:13:36.990 "rw_ios_per_sec": 0, 00:13:36.990 "rw_mbytes_per_sec": 0, 00:13:36.990 "r_mbytes_per_sec": 0, 00:13:36.990 "w_mbytes_per_sec": 0 00:13:36.990 }, 00:13:36.990 "claimed": false, 00:13:36.990 "zoned": false, 00:13:36.990 "supported_io_types": { 00:13:36.990 "read": true, 00:13:36.990 "write": true, 00:13:36.990 "unmap": true, 00:13:36.990 "flush": true, 00:13:36.990 "reset": true, 00:13:36.990 "nvme_admin": false, 00:13:36.990 "nvme_io": false, 00:13:36.990 "nvme_io_md": false, 00:13:36.990 "write_zeroes": true, 00:13:36.990 "zcopy": false, 00:13:36.990 "get_zone_info": false, 00:13:36.990 "zone_management": false, 00:13:36.990 "zone_append": false, 00:13:36.990 "compare": false, 00:13:36.990 "compare_and_write": false, 00:13:36.990 "abort": 
false, 00:13:36.990 "seek_hole": false, 00:13:36.990 "seek_data": false, 00:13:36.990 "copy": false, 00:13:36.990 "nvme_iov_md": false 00:13:36.990 }, 00:13:36.990 "memory_domains": [ 00:13:36.990 { 00:13:36.990 "dma_device_id": "system", 00:13:36.990 "dma_device_type": 1 00:13:36.990 }, 00:13:36.990 { 00:13:36.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.990 "dma_device_type": 2 00:13:36.990 }, 00:13:36.990 { 00:13:36.990 "dma_device_id": "system", 00:13:36.990 "dma_device_type": 1 00:13:36.990 }, 00:13:36.990 { 00:13:36.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.990 "dma_device_type": 2 00:13:36.990 }, 00:13:36.990 { 00:13:36.990 "dma_device_id": "system", 00:13:36.990 "dma_device_type": 1 00:13:36.990 }, 00:13:36.990 { 00:13:36.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.990 "dma_device_type": 2 00:13:36.990 }, 00:13:36.990 { 00:13:36.990 "dma_device_id": "system", 00:13:36.990 "dma_device_type": 1 00:13:36.990 }, 00:13:36.990 { 00:13:36.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.990 "dma_device_type": 2 00:13:36.990 } 00:13:36.990 ], 00:13:36.990 "driver_specific": { 00:13:36.990 "raid": { 00:13:36.990 "uuid": "a203e0b5-eb54-4bce-935f-4db61ea450ad", 00:13:36.990 "strip_size_kb": 64, 00:13:36.990 "state": "online", 00:13:36.990 "raid_level": "raid0", 00:13:36.990 "superblock": true, 00:13:36.990 "num_base_bdevs": 4, 00:13:36.990 "num_base_bdevs_discovered": 4, 00:13:36.990 "num_base_bdevs_operational": 4, 00:13:36.990 "base_bdevs_list": [ 00:13:36.990 { 00:13:36.990 "name": "NewBaseBdev", 00:13:36.990 "uuid": "83e56de2-ce29-40c5-b8a2-bdc5d59c6656", 00:13:36.990 "is_configured": true, 00:13:36.990 "data_offset": 2048, 00:13:36.990 "data_size": 63488 00:13:36.990 }, 00:13:36.990 { 00:13:36.990 "name": "BaseBdev2", 00:13:36.990 "uuid": "0355ccfc-15d0-4951-b0fa-c2c0aeee63e3", 00:13:36.990 "is_configured": true, 00:13:36.990 "data_offset": 2048, 00:13:36.990 "data_size": 63488 00:13:36.990 }, 00:13:36.990 { 00:13:36.990 
"name": "BaseBdev3", 00:13:36.990 "uuid": "edb551b4-75b5-4654-8a32-58adf9c592ea", 00:13:36.990 "is_configured": true, 00:13:36.990 "data_offset": 2048, 00:13:36.990 "data_size": 63488 00:13:36.990 }, 00:13:36.990 { 00:13:36.990 "name": "BaseBdev4", 00:13:36.990 "uuid": "7b139712-52de-4d8e-8dad-7e3f3089df6f", 00:13:36.990 "is_configured": true, 00:13:36.990 "data_offset": 2048, 00:13:36.990 "data_size": 63488 00:13:36.990 } 00:13:36.990 ] 00:13:36.990 } 00:13:36.990 } 00:13:36.990 }' 00:13:36.990 15:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:36.990 15:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:36.990 BaseBdev2 00:13:36.990 BaseBdev3 00:13:36.990 BaseBdev4' 00:13:36.990 15:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.990 15:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:36.990 15:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:36.990 15:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.990 15:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:36.990 15:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.990 15:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.990 15:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.990 15:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:36.990 15:40:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:36.991 15:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:36.991 15:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:36.991 15:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.991 15:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.991 15:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.991 15:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.991 15:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:36.991 15:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:36.991 15:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:36.991 15:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:36.991 15:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.991 15:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.991 15:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.250 15:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.250 15:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:37.250 15:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:13:37.250 15:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:37.250 15:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:37.250 15:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.250 15:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.250 15:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.250 15:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.250 15:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:37.250 15:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:37.250 15:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:37.250 15:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.251 15:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.251 [2024-12-06 15:40:20.367156] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:37.251 [2024-12-06 15:40:20.367201] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:37.251 [2024-12-06 15:40:20.367297] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:37.251 [2024-12-06 15:40:20.367388] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:37.251 [2024-12-06 15:40:20.367400] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:13:37.251 15:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.251 15:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70070 00:13:37.251 15:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70070 ']' 00:13:37.251 15:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70070 00:13:37.251 15:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:37.251 15:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:37.251 15:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70070 00:13:37.251 15:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:37.251 15:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:37.251 killing process with pid 70070 00:13:37.251 15:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70070' 00:13:37.251 15:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70070 00:13:37.251 [2024-12-06 15:40:20.420145] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:37.251 15:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70070 00:13:37.819 [2024-12-06 15:40:20.863536] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:39.215 15:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:39.215 00:13:39.215 real 0m11.566s 00:13:39.215 user 0m17.975s 00:13:39.215 sys 0m2.400s 00:13:39.215 15:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:39.215 15:40:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.215 ************************************ 00:13:39.215 END TEST raid_state_function_test_sb 00:13:39.215 ************************************ 00:13:39.215 15:40:22 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:13:39.215 15:40:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:39.215 15:40:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:39.215 15:40:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:39.215 ************************************ 00:13:39.215 START TEST raid_superblock_test 00:13:39.215 ************************************ 00:13:39.215 15:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:13:39.215 15:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:13:39.215 15:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:13:39.215 15:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:39.215 15:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:39.215 15:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:39.215 15:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:39.215 15:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:39.215 15:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:39.215 15:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:39.215 15:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:39.215 15:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:13:39.215 15:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:39.215 15:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:39.215 15:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:13:39.215 15:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:39.215 15:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:39.215 15:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70748 00:13:39.215 15:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:39.215 15:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70748 00:13:39.215 15:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70748 ']' 00:13:39.215 15:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.215 15:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:39.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.215 15:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.215 15:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:39.215 15:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.215 [2024-12-06 15:40:22.310022] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:13:39.215 [2024-12-06 15:40:22.310184] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70748 ] 00:13:39.215 [2024-12-06 15:40:22.494191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.503 [2024-12-06 15:40:22.637974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.762 [2024-12-06 15:40:22.889766] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:39.762 [2024-12-06 15:40:22.889851] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:40.023 
15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.023 malloc1 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.023 [2024-12-06 15:40:23.208577] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:40.023 [2024-12-06 15:40:23.208645] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.023 [2024-12-06 15:40:23.208674] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:40.023 [2024-12-06 15:40:23.208687] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.023 [2024-12-06 15:40:23.211396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.023 [2024-12-06 15:40:23.211434] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:40.023 pt1 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.023 malloc2 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.023 [2024-12-06 15:40:23.267669] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:40.023 [2024-12-06 15:40:23.267727] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.023 [2024-12-06 15:40:23.267761] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:40.023 [2024-12-06 15:40:23.267774] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.023 [2024-12-06 15:40:23.270582] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.023 [2024-12-06 15:40:23.270615] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:40.023 
pt2 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.023 15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.284 malloc3 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.284 [2024-12-06 15:40:23.344932] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:40.284 [2024-12-06 15:40:23.344987] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.284 [2024-12-06 15:40:23.345015] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:40.284 [2024-12-06 15:40:23.345027] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.284 [2024-12-06 15:40:23.347709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.284 [2024-12-06 15:40:23.347745] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:40.284 pt3 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.284 malloc4 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.284 [2024-12-06 15:40:23.411223] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:40.284 [2024-12-06 15:40:23.411290] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.284 [2024-12-06 15:40:23.411315] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:40.284 [2024-12-06 15:40:23.411328] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.284 [2024-12-06 15:40:23.414047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.284 [2024-12-06 15:40:23.414082] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:40.284 pt4 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.284 [2024-12-06 15:40:23.423242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:40.284 [2024-12-06 
15:40:23.425632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:40.284 [2024-12-06 15:40:23.425730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:40.284 [2024-12-06 15:40:23.425777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:40.284 [2024-12-06 15:40:23.425973] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:40.284 [2024-12-06 15:40:23.425985] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:40.284 [2024-12-06 15:40:23.426282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:40.284 [2024-12-06 15:40:23.426467] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:40.284 [2024-12-06 15:40:23.426482] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:40.284 [2024-12-06 15:40:23.426661] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.284 "name": "raid_bdev1", 00:13:40.284 "uuid": "912a516b-92c7-4f51-873c-d03e6e93337e", 00:13:40.284 "strip_size_kb": 64, 00:13:40.284 "state": "online", 00:13:40.284 "raid_level": "raid0", 00:13:40.284 "superblock": true, 00:13:40.284 "num_base_bdevs": 4, 00:13:40.284 "num_base_bdevs_discovered": 4, 00:13:40.284 "num_base_bdevs_operational": 4, 00:13:40.284 "base_bdevs_list": [ 00:13:40.284 { 00:13:40.284 "name": "pt1", 00:13:40.284 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:40.284 "is_configured": true, 00:13:40.284 "data_offset": 2048, 00:13:40.284 "data_size": 63488 00:13:40.284 }, 00:13:40.284 { 00:13:40.284 "name": "pt2", 00:13:40.284 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:40.284 "is_configured": true, 00:13:40.284 "data_offset": 2048, 00:13:40.284 "data_size": 63488 00:13:40.284 }, 00:13:40.284 { 00:13:40.284 "name": "pt3", 00:13:40.284 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:40.284 "is_configured": true, 00:13:40.284 "data_offset": 2048, 00:13:40.284 
"data_size": 63488 00:13:40.284 }, 00:13:40.284 { 00:13:40.284 "name": "pt4", 00:13:40.284 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:40.284 "is_configured": true, 00:13:40.284 "data_offset": 2048, 00:13:40.284 "data_size": 63488 00:13:40.284 } 00:13:40.284 ] 00:13:40.284 }' 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.284 15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.544 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:40.544 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:40.544 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:40.544 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:40.544 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:40.544 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:40.544 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:40.544 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:40.544 15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.544 15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.544 [2024-12-06 15:40:23.807062] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:40.804 15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.804 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:40.804 "name": "raid_bdev1", 00:13:40.804 "aliases": [ 00:13:40.804 "912a516b-92c7-4f51-873c-d03e6e93337e" 
00:13:40.804 ], 00:13:40.804 "product_name": "Raid Volume", 00:13:40.804 "block_size": 512, 00:13:40.804 "num_blocks": 253952, 00:13:40.804 "uuid": "912a516b-92c7-4f51-873c-d03e6e93337e", 00:13:40.804 "assigned_rate_limits": { 00:13:40.804 "rw_ios_per_sec": 0, 00:13:40.804 "rw_mbytes_per_sec": 0, 00:13:40.804 "r_mbytes_per_sec": 0, 00:13:40.804 "w_mbytes_per_sec": 0 00:13:40.804 }, 00:13:40.804 "claimed": false, 00:13:40.804 "zoned": false, 00:13:40.804 "supported_io_types": { 00:13:40.804 "read": true, 00:13:40.804 "write": true, 00:13:40.804 "unmap": true, 00:13:40.804 "flush": true, 00:13:40.804 "reset": true, 00:13:40.804 "nvme_admin": false, 00:13:40.804 "nvme_io": false, 00:13:40.804 "nvme_io_md": false, 00:13:40.804 "write_zeroes": true, 00:13:40.804 "zcopy": false, 00:13:40.804 "get_zone_info": false, 00:13:40.804 "zone_management": false, 00:13:40.804 "zone_append": false, 00:13:40.804 "compare": false, 00:13:40.804 "compare_and_write": false, 00:13:40.804 "abort": false, 00:13:40.804 "seek_hole": false, 00:13:40.804 "seek_data": false, 00:13:40.804 "copy": false, 00:13:40.804 "nvme_iov_md": false 00:13:40.804 }, 00:13:40.804 "memory_domains": [ 00:13:40.804 { 00:13:40.804 "dma_device_id": "system", 00:13:40.804 "dma_device_type": 1 00:13:40.804 }, 00:13:40.804 { 00:13:40.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.804 "dma_device_type": 2 00:13:40.804 }, 00:13:40.804 { 00:13:40.804 "dma_device_id": "system", 00:13:40.804 "dma_device_type": 1 00:13:40.804 }, 00:13:40.804 { 00:13:40.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.804 "dma_device_type": 2 00:13:40.804 }, 00:13:40.804 { 00:13:40.804 "dma_device_id": "system", 00:13:40.804 "dma_device_type": 1 00:13:40.804 }, 00:13:40.804 { 00:13:40.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.804 "dma_device_type": 2 00:13:40.804 }, 00:13:40.804 { 00:13:40.804 "dma_device_id": "system", 00:13:40.804 "dma_device_type": 1 00:13:40.804 }, 00:13:40.804 { 00:13:40.804 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:40.804 "dma_device_type": 2 00:13:40.804 } 00:13:40.804 ], 00:13:40.804 "driver_specific": { 00:13:40.804 "raid": { 00:13:40.804 "uuid": "912a516b-92c7-4f51-873c-d03e6e93337e", 00:13:40.804 "strip_size_kb": 64, 00:13:40.804 "state": "online", 00:13:40.804 "raid_level": "raid0", 00:13:40.804 "superblock": true, 00:13:40.804 "num_base_bdevs": 4, 00:13:40.804 "num_base_bdevs_discovered": 4, 00:13:40.804 "num_base_bdevs_operational": 4, 00:13:40.804 "base_bdevs_list": [ 00:13:40.804 { 00:13:40.804 "name": "pt1", 00:13:40.804 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:40.804 "is_configured": true, 00:13:40.804 "data_offset": 2048, 00:13:40.804 "data_size": 63488 00:13:40.804 }, 00:13:40.804 { 00:13:40.804 "name": "pt2", 00:13:40.805 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:40.805 "is_configured": true, 00:13:40.805 "data_offset": 2048, 00:13:40.805 "data_size": 63488 00:13:40.805 }, 00:13:40.805 { 00:13:40.805 "name": "pt3", 00:13:40.805 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:40.805 "is_configured": true, 00:13:40.805 "data_offset": 2048, 00:13:40.805 "data_size": 63488 00:13:40.805 }, 00:13:40.805 { 00:13:40.805 "name": "pt4", 00:13:40.805 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:40.805 "is_configured": true, 00:13:40.805 "data_offset": 2048, 00:13:40.805 "data_size": 63488 00:13:40.805 } 00:13:40.805 ] 00:13:40.805 } 00:13:40.805 } 00:13:40.805 }' 00:13:40.805 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:40.805 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:40.805 pt2 00:13:40.805 pt3 00:13:40.805 pt4' 00:13:40.805 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:40.805 15:40:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:40.805 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:40.805 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:40.805 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:40.805 15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.805 15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.805 15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.805 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:40.805 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:40.805 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:40.805 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:40.805 15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.805 15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.805 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:40.805 15:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.805 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:40.805 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:40.805 15:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:40.805 15:40:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:40.805 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:40.805 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.805 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.805 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.805 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:40.805 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:40.805 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:40.805 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:40.805 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.805 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.805 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:40.805 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.805 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:40.805 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:40.805 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:40.805 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.805 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:13:40.805 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:41.065 [2024-12-06 15:40:24.098930] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:41.065 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.065 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=912a516b-92c7-4f51-873c-d03e6e93337e 00:13:41.065 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 912a516b-92c7-4f51-873c-d03e6e93337e ']' 00:13:41.065 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:41.065 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.065 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.065 [2024-12-06 15:40:24.138637] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:41.065 [2024-12-06 15:40:24.138673] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:41.065 [2024-12-06 15:40:24.138774] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:41.065 [2024-12-06 15:40:24.138859] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:41.065 [2024-12-06 15:40:24.138879] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:41.065 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.065 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.065 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.065 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:13:41.065 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:41.065 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.065 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:41.065 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:41.065 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:41.065 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:41.065 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.066 [2024-12-06 15:40:24.290679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:41.066 [2024-12-06 15:40:24.293146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:41.066 [2024-12-06 15:40:24.293208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:41.066 [2024-12-06 15:40:24.293247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:41.066 [2024-12-06 15:40:24.293305] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:41.066 [2024-12-06 15:40:24.293358] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:41.066 [2024-12-06 15:40:24.293381] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:41.066 [2024-12-06 15:40:24.293403] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:41.066 [2024-12-06 15:40:24.293420] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:41.066 [2024-12-06 15:40:24.293438] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:13:41.066 request: 00:13:41.066 { 00:13:41.066 "name": "raid_bdev1", 00:13:41.066 "raid_level": "raid0", 00:13:41.066 "base_bdevs": [ 00:13:41.066 "malloc1", 00:13:41.066 "malloc2", 00:13:41.066 "malloc3", 00:13:41.066 "malloc4" 00:13:41.066 ], 00:13:41.066 "strip_size_kb": 64, 00:13:41.066 "superblock": false, 00:13:41.066 "method": "bdev_raid_create", 00:13:41.066 "req_id": 1 00:13:41.066 } 00:13:41.066 Got JSON-RPC error response 00:13:41.066 response: 00:13:41.066 { 00:13:41.066 "code": -17, 00:13:41.066 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:41.066 } 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.066 [2024-12-06 15:40:24.350638] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:41.066 [2024-12-06 15:40:24.350696] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.066 [2024-12-06 15:40:24.350722] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:41.066 [2024-12-06 15:40:24.350737] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.066 [2024-12-06 15:40:24.353617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.066 [2024-12-06 15:40:24.353659] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:41.066 [2024-12-06 15:40:24.353755] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:41.066 [2024-12-06 15:40:24.353818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:41.066 pt1 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.066 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.325 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.325 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.325 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.325 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.325 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.325 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.325 "name": "raid_bdev1", 00:13:41.325 "uuid": "912a516b-92c7-4f51-873c-d03e6e93337e", 00:13:41.325 "strip_size_kb": 64, 00:13:41.325 "state": "configuring", 00:13:41.325 "raid_level": "raid0", 00:13:41.325 "superblock": true, 00:13:41.325 "num_base_bdevs": 4, 00:13:41.325 "num_base_bdevs_discovered": 1, 00:13:41.325 "num_base_bdevs_operational": 4, 00:13:41.325 "base_bdevs_list": [ 00:13:41.325 { 00:13:41.325 "name": "pt1", 00:13:41.325 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:41.325 "is_configured": true, 00:13:41.325 "data_offset": 2048, 00:13:41.325 "data_size": 63488 00:13:41.325 }, 00:13:41.325 { 00:13:41.325 "name": null, 00:13:41.325 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:41.325 "is_configured": false, 00:13:41.325 "data_offset": 2048, 00:13:41.325 "data_size": 63488 00:13:41.325 }, 00:13:41.325 { 00:13:41.325 "name": null, 00:13:41.325 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:13:41.325 "is_configured": false, 00:13:41.325 "data_offset": 2048, 00:13:41.325 "data_size": 63488 00:13:41.325 }, 00:13:41.325 { 00:13:41.325 "name": null, 00:13:41.325 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:41.325 "is_configured": false, 00:13:41.325 "data_offset": 2048, 00:13:41.325 "data_size": 63488 00:13:41.325 } 00:13:41.325 ] 00:13:41.325 }' 00:13:41.325 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.325 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.585 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:13:41.585 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:41.585 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.585 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.585 [2024-12-06 15:40:24.774705] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:41.585 [2024-12-06 15:40:24.774802] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.585 [2024-12-06 15:40:24.774829] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:41.585 [2024-12-06 15:40:24.774845] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.585 [2024-12-06 15:40:24.775416] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.585 [2024-12-06 15:40:24.775447] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:41.585 [2024-12-06 15:40:24.775566] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:41.585 [2024-12-06 15:40:24.775600] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:41.585 pt2 00:13:41.585 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.585 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:41.585 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.585 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.585 [2024-12-06 15:40:24.786704] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:41.585 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.585 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:13:41.585 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.585 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:41.585 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:41.585 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:41.585 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:41.585 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.585 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.585 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.585 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.585 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.585 15:40:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.585 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.585 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.585 15:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.585 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.585 "name": "raid_bdev1", 00:13:41.585 "uuid": "912a516b-92c7-4f51-873c-d03e6e93337e", 00:13:41.585 "strip_size_kb": 64, 00:13:41.585 "state": "configuring", 00:13:41.585 "raid_level": "raid0", 00:13:41.585 "superblock": true, 00:13:41.585 "num_base_bdevs": 4, 00:13:41.585 "num_base_bdevs_discovered": 1, 00:13:41.585 "num_base_bdevs_operational": 4, 00:13:41.585 "base_bdevs_list": [ 00:13:41.585 { 00:13:41.585 "name": "pt1", 00:13:41.585 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:41.585 "is_configured": true, 00:13:41.585 "data_offset": 2048, 00:13:41.585 "data_size": 63488 00:13:41.585 }, 00:13:41.585 { 00:13:41.585 "name": null, 00:13:41.585 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:41.585 "is_configured": false, 00:13:41.585 "data_offset": 0, 00:13:41.585 "data_size": 63488 00:13:41.585 }, 00:13:41.585 { 00:13:41.585 "name": null, 00:13:41.585 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:41.585 "is_configured": false, 00:13:41.585 "data_offset": 2048, 00:13:41.585 "data_size": 63488 00:13:41.585 }, 00:13:41.585 { 00:13:41.585 "name": null, 00:13:41.585 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:41.585 "is_configured": false, 00:13:41.585 "data_offset": 2048, 00:13:41.585 "data_size": 63488 00:13:41.585 } 00:13:41.585 ] 00:13:41.585 }' 00:13:41.585 15:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.585 15:40:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:42.152 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:42.152 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:42.152 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:42.152 15:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.152 15:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.152 [2024-12-06 15:40:25.206722] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:42.152 [2024-12-06 15:40:25.206811] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.152 [2024-12-06 15:40:25.206839] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:42.152 [2024-12-06 15:40:25.206852] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.152 [2024-12-06 15:40:25.207416] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.152 [2024-12-06 15:40:25.207452] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:42.152 [2024-12-06 15:40:25.207578] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:42.152 [2024-12-06 15:40:25.207607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:42.152 pt2 00:13:42.152 15:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.152 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:42.152 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:42.152 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:42.152 15:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.152 15:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.152 [2024-12-06 15:40:25.218659] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:42.152 [2024-12-06 15:40:25.218722] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.152 [2024-12-06 15:40:25.218747] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:42.152 [2024-12-06 15:40:25.218758] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.152 [2024-12-06 15:40:25.219231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.152 [2024-12-06 15:40:25.219250] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:42.152 [2024-12-06 15:40:25.219333] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:42.152 [2024-12-06 15:40:25.219363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:42.152 pt3 00:13:42.152 15:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.152 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:42.152 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:42.152 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:42.152 15:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.152 15:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.152 [2024-12-06 15:40:25.230620] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:42.152 [2024-12-06 15:40:25.230668] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.152 [2024-12-06 15:40:25.230688] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:42.152 [2024-12-06 15:40:25.230700] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.152 [2024-12-06 15:40:25.231139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.152 [2024-12-06 15:40:25.231161] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:42.152 [2024-12-06 15:40:25.231235] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:42.152 [2024-12-06 15:40:25.231261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:42.152 [2024-12-06 15:40:25.231412] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:42.152 [2024-12-06 15:40:25.231422] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:42.152 [2024-12-06 15:40:25.231704] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:42.152 [2024-12-06 15:40:25.231859] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:42.152 [2024-12-06 15:40:25.231874] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:42.152 [2024-12-06 15:40:25.232034] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.152 pt4 00:13:42.152 15:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.152 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:42.152 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:13:42.152 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:42.152 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.152 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.152 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:42.152 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:42.152 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:42.152 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.152 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.152 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.152 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.152 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.152 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.152 15:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.152 15:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.152 15:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.152 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.152 "name": "raid_bdev1", 00:13:42.152 "uuid": "912a516b-92c7-4f51-873c-d03e6e93337e", 00:13:42.152 "strip_size_kb": 64, 00:13:42.152 "state": "online", 00:13:42.152 "raid_level": "raid0", 00:13:42.152 
"superblock": true, 00:13:42.152 "num_base_bdevs": 4, 00:13:42.152 "num_base_bdevs_discovered": 4, 00:13:42.152 "num_base_bdevs_operational": 4, 00:13:42.152 "base_bdevs_list": [ 00:13:42.152 { 00:13:42.152 "name": "pt1", 00:13:42.152 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:42.152 "is_configured": true, 00:13:42.152 "data_offset": 2048, 00:13:42.152 "data_size": 63488 00:13:42.152 }, 00:13:42.152 { 00:13:42.152 "name": "pt2", 00:13:42.152 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:42.152 "is_configured": true, 00:13:42.152 "data_offset": 2048, 00:13:42.152 "data_size": 63488 00:13:42.152 }, 00:13:42.152 { 00:13:42.152 "name": "pt3", 00:13:42.152 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:42.152 "is_configured": true, 00:13:42.152 "data_offset": 2048, 00:13:42.152 "data_size": 63488 00:13:42.152 }, 00:13:42.152 { 00:13:42.152 "name": "pt4", 00:13:42.152 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:42.152 "is_configured": true, 00:13:42.152 "data_offset": 2048, 00:13:42.152 "data_size": 63488 00:13:42.152 } 00:13:42.152 ] 00:13:42.152 }' 00:13:42.152 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.152 15:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.411 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:42.411 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:42.411 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:42.411 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:42.411 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:42.411 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:42.411 15:40:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:42.411 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:42.411 15:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.411 15:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.411 [2024-12-06 15:40:25.666737] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:42.411 15:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.411 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:42.411 "name": "raid_bdev1", 00:13:42.411 "aliases": [ 00:13:42.411 "912a516b-92c7-4f51-873c-d03e6e93337e" 00:13:42.411 ], 00:13:42.411 "product_name": "Raid Volume", 00:13:42.411 "block_size": 512, 00:13:42.411 "num_blocks": 253952, 00:13:42.411 "uuid": "912a516b-92c7-4f51-873c-d03e6e93337e", 00:13:42.411 "assigned_rate_limits": { 00:13:42.412 "rw_ios_per_sec": 0, 00:13:42.412 "rw_mbytes_per_sec": 0, 00:13:42.412 "r_mbytes_per_sec": 0, 00:13:42.412 "w_mbytes_per_sec": 0 00:13:42.412 }, 00:13:42.412 "claimed": false, 00:13:42.412 "zoned": false, 00:13:42.412 "supported_io_types": { 00:13:42.412 "read": true, 00:13:42.412 "write": true, 00:13:42.412 "unmap": true, 00:13:42.412 "flush": true, 00:13:42.412 "reset": true, 00:13:42.412 "nvme_admin": false, 00:13:42.412 "nvme_io": false, 00:13:42.412 "nvme_io_md": false, 00:13:42.412 "write_zeroes": true, 00:13:42.412 "zcopy": false, 00:13:42.412 "get_zone_info": false, 00:13:42.412 "zone_management": false, 00:13:42.412 "zone_append": false, 00:13:42.412 "compare": false, 00:13:42.412 "compare_and_write": false, 00:13:42.412 "abort": false, 00:13:42.412 "seek_hole": false, 00:13:42.412 "seek_data": false, 00:13:42.412 "copy": false, 00:13:42.412 "nvme_iov_md": false 00:13:42.412 }, 00:13:42.412 
"memory_domains": [ 00:13:42.412 { 00:13:42.412 "dma_device_id": "system", 00:13:42.412 "dma_device_type": 1 00:13:42.412 }, 00:13:42.412 { 00:13:42.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.412 "dma_device_type": 2 00:13:42.412 }, 00:13:42.412 { 00:13:42.412 "dma_device_id": "system", 00:13:42.412 "dma_device_type": 1 00:13:42.412 }, 00:13:42.412 { 00:13:42.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.412 "dma_device_type": 2 00:13:42.412 }, 00:13:42.412 { 00:13:42.412 "dma_device_id": "system", 00:13:42.412 "dma_device_type": 1 00:13:42.412 }, 00:13:42.412 { 00:13:42.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.412 "dma_device_type": 2 00:13:42.412 }, 00:13:42.412 { 00:13:42.412 "dma_device_id": "system", 00:13:42.412 "dma_device_type": 1 00:13:42.412 }, 00:13:42.412 { 00:13:42.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.412 "dma_device_type": 2 00:13:42.412 } 00:13:42.412 ], 00:13:42.412 "driver_specific": { 00:13:42.412 "raid": { 00:13:42.412 "uuid": "912a516b-92c7-4f51-873c-d03e6e93337e", 00:13:42.412 "strip_size_kb": 64, 00:13:42.412 "state": "online", 00:13:42.412 "raid_level": "raid0", 00:13:42.412 "superblock": true, 00:13:42.412 "num_base_bdevs": 4, 00:13:42.412 "num_base_bdevs_discovered": 4, 00:13:42.412 "num_base_bdevs_operational": 4, 00:13:42.412 "base_bdevs_list": [ 00:13:42.412 { 00:13:42.412 "name": "pt1", 00:13:42.412 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:42.412 "is_configured": true, 00:13:42.412 "data_offset": 2048, 00:13:42.412 "data_size": 63488 00:13:42.412 }, 00:13:42.412 { 00:13:42.412 "name": "pt2", 00:13:42.412 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:42.412 "is_configured": true, 00:13:42.412 "data_offset": 2048, 00:13:42.412 "data_size": 63488 00:13:42.412 }, 00:13:42.412 { 00:13:42.412 "name": "pt3", 00:13:42.412 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:42.412 "is_configured": true, 00:13:42.412 "data_offset": 2048, 00:13:42.412 "data_size": 63488 
00:13:42.412 }, 00:13:42.412 { 00:13:42.412 "name": "pt4", 00:13:42.412 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:42.412 "is_configured": true, 00:13:42.412 "data_offset": 2048, 00:13:42.412 "data_size": 63488 00:13:42.412 } 00:13:42.412 ] 00:13:42.412 } 00:13:42.412 } 00:13:42.412 }' 00:13:42.671 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:42.671 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:42.671 pt2 00:13:42.671 pt3 00:13:42.671 pt4' 00:13:42.671 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.671 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:42.671 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:42.671 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.671 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:42.671 15:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.671 15:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.671 15:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.671 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:42.671 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:42.671 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:42.671 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:13:42.671 15:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.671 15:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.671 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.671 15:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.671 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:42.671 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:42.671 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:42.671 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:42.671 15:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.671 15:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.671 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.671 15:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.671 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:42.671 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:42.671 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:42.671 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:42.671 15:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.671 15:40:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:42.671 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.671 15:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.929 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:42.929 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:42.929 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:42.929 15:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.929 15:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.929 15:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:42.929 [2024-12-06 15:40:25.974656] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:42.929 15:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.929 15:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 912a516b-92c7-4f51-873c-d03e6e93337e '!=' 912a516b-92c7-4f51-873c-d03e6e93337e ']' 00:13:42.929 15:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:13:42.929 15:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:42.929 15:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:42.929 15:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70748 00:13:42.929 15:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70748 ']' 00:13:42.929 15:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70748 00:13:42.929 15:40:26 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:13:42.929 15:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:42.929 15:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70748 00:13:42.929 15:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:42.929 15:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:42.929 killing process with pid 70748 00:13:42.929 15:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70748' 00:13:42.929 15:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70748 00:13:42.929 15:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70748 00:13:42.929 [2024-12-06 15:40:26.064366] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:42.929 [2024-12-06 15:40:26.064487] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:42.929 [2024-12-06 15:40:26.064599] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:42.929 [2024-12-06 15:40:26.064613] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:43.495 [2024-12-06 15:40:26.540742] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:44.871 15:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:44.871 00:13:44.871 real 0m5.678s 00:13:44.871 user 0m7.755s 00:13:44.871 sys 0m1.168s 00:13:44.871 15:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:44.871 15:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.871 ************************************ 00:13:44.871 END TEST raid_superblock_test 
00:13:44.871 ************************************ 00:13:44.871 15:40:27 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:13:44.871 15:40:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:44.871 15:40:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:44.871 15:40:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:44.871 ************************************ 00:13:44.871 START TEST raid_read_error_test 00:13:44.871 ************************************ 00:13:44.871 15:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:13:44.871 15:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:13:44.871 15:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:44.871 15:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:44.871 15:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:44.871 15:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:44.871 15:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:44.871 15:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:44.871 15:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:44.871 15:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:44.871 15:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:44.871 15:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:44.871 15:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:44.871 15:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:13:44.871 15:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:44.871 15:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:44.871 15:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:44.872 15:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:44.872 15:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:44.872 15:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:44.872 15:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:44.872 15:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:44.872 15:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:44.872 15:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:44.872 15:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:44.872 15:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:13:44.872 15:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:44.872 15:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:44.872 15:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:44.872 15:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.eQ4JtO76ah 00:13:44.872 15:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71010 00:13:44.872 15:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f 
-L bdev_raid 00:13:44.872 15:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71010 00:13:44.872 15:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71010 ']' 00:13:44.872 15:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.872 15:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:44.872 15:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.872 15:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:44.872 15:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.872 [2024-12-06 15:40:28.094331] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:13:44.872 [2024-12-06 15:40:28.094494] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71010 ] 00:13:45.132 [2024-12-06 15:40:28.287551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.390 [2024-12-06 15:40:28.446737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.649 [2024-12-06 15:40:28.709919] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.649 [2024-12-06 15:40:28.709992] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.910 15:40:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:45.910 15:40:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:45.910 15:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:45.910 15:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:45.910 15:40:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.910 15:40:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.910 BaseBdev1_malloc 00:13:45.910 15:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.910 15:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:45.910 15:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.910 15:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.910 true 00:13:45.910 15:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:45.910 15:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:45.910 15:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.910 15:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.910 [2024-12-06 15:40:29.062210] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:45.910 [2024-12-06 15:40:29.062285] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.910 [2024-12-06 15:40:29.062312] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:45.910 [2024-12-06 15:40:29.062329] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.910 [2024-12-06 15:40:29.065291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.910 [2024-12-06 15:40:29.065342] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:45.910 BaseBdev1 00:13:45.910 15:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.910 15:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:45.910 15:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:45.910 15:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.910 15:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.910 BaseBdev2_malloc 00:13:45.910 15:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.910 15:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:45.910 15:40:29 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.910 15:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.910 true 00:13:45.910 15:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.910 15:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:45.910 15:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.910 15:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.910 [2024-12-06 15:40:29.141481] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:45.910 [2024-12-06 15:40:29.141566] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.910 [2024-12-06 15:40:29.141589] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:45.910 [2024-12-06 15:40:29.141606] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.910 [2024-12-06 15:40:29.144579] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.910 [2024-12-06 15:40:29.144625] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:45.910 BaseBdev2 00:13:45.910 15:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.910 15:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:45.910 15:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:45.910 15:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.910 15:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.169 BaseBdev3_malloc 00:13:46.169 15:40:29 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.169 15:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:46.169 15:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.169 15:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.169 true 00:13:46.169 15:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.169 15:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:46.169 15:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.169 15:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.169 [2024-12-06 15:40:29.232691] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:46.169 [2024-12-06 15:40:29.232900] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.169 [2024-12-06 15:40:29.232970] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:46.169 [2024-12-06 15:40:29.233182] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.169 [2024-12-06 15:40:29.236188] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.169 [2024-12-06 15:40:29.236352] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:46.169 BaseBdev3 00:13:46.169 15:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.169 15:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:46.169 15:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:13:46.169 15:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.169 15:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.169 BaseBdev4_malloc 00:13:46.169 15:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.169 15:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:46.169 15:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.169 15:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.169 true 00:13:46.169 15:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.169 15:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:46.169 15:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.169 15:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.169 [2024-12-06 15:40:29.312010] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:46.169 [2024-12-06 15:40:29.312086] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.169 [2024-12-06 15:40:29.312111] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:46.169 [2024-12-06 15:40:29.312127] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.169 [2024-12-06 15:40:29.315008] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.169 [2024-12-06 15:40:29.315233] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:46.169 BaseBdev4 00:13:46.169 15:40:29 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.169 15:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:46.169 15:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.169 15:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.169 [2024-12-06 15:40:29.324175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:46.169 [2024-12-06 15:40:29.326719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:46.169 [2024-12-06 15:40:29.326805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:46.169 [2024-12-06 15:40:29.326888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:46.169 [2024-12-06 15:40:29.327144] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:46.169 [2024-12-06 15:40:29.327168] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:46.169 [2024-12-06 15:40:29.327469] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:46.169 [2024-12-06 15:40:29.327690] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:46.169 [2024-12-06 15:40:29.327720] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:46.169 [2024-12-06 15:40:29.327904] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:46.169 15:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.169 15:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:46.169 15:40:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:46.169 15:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.169 15:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:46.169 15:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:46.169 15:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:46.169 15:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.169 15:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.169 15:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.169 15:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.169 15:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.169 15:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.169 15:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.169 15:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.169 15:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.170 15:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.170 "name": "raid_bdev1", 00:13:46.170 "uuid": "9366f3f0-18a8-4366-a407-6f039bd75c10", 00:13:46.170 "strip_size_kb": 64, 00:13:46.170 "state": "online", 00:13:46.170 "raid_level": "raid0", 00:13:46.170 "superblock": true, 00:13:46.170 "num_base_bdevs": 4, 00:13:46.170 "num_base_bdevs_discovered": 4, 00:13:46.170 "num_base_bdevs_operational": 4, 00:13:46.170 "base_bdevs_list": [ 00:13:46.170 
{ 00:13:46.170 "name": "BaseBdev1", 00:13:46.170 "uuid": "53051864-e90f-5c6d-a9f2-9b877cc62304", 00:13:46.170 "is_configured": true, 00:13:46.170 "data_offset": 2048, 00:13:46.170 "data_size": 63488 00:13:46.170 }, 00:13:46.170 { 00:13:46.170 "name": "BaseBdev2", 00:13:46.170 "uuid": "ef1ad07c-2abf-52cd-8368-de0119095b72", 00:13:46.170 "is_configured": true, 00:13:46.170 "data_offset": 2048, 00:13:46.170 "data_size": 63488 00:13:46.170 }, 00:13:46.170 { 00:13:46.170 "name": "BaseBdev3", 00:13:46.170 "uuid": "d4507b39-66cc-55de-9547-380053ae7384", 00:13:46.170 "is_configured": true, 00:13:46.170 "data_offset": 2048, 00:13:46.170 "data_size": 63488 00:13:46.170 }, 00:13:46.170 { 00:13:46.170 "name": "BaseBdev4", 00:13:46.170 "uuid": "507d750f-cbe3-5d53-86ea-6b3e4d04ec7b", 00:13:46.170 "is_configured": true, 00:13:46.170 "data_offset": 2048, 00:13:46.170 "data_size": 63488 00:13:46.170 } 00:13:46.170 ] 00:13:46.170 }' 00:13:46.170 15:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.170 15:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.737 15:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:46.737 15:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:46.737 [2024-12-06 15:40:29.889168] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:47.674 15:40:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:47.674 15:40:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.674 15:40:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.674 15:40:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.674 15:40:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:47.674 15:40:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:47.674 15:40:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:47.674 15:40:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:47.674 15:40:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.674 15:40:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.674 15:40:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:47.674 15:40:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.674 15:40:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:47.674 15:40:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.674 15:40:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.674 15:40:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.674 15:40:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.674 15:40:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.674 15:40:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.674 15:40:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.674 15:40:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.674 15:40:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.674 15:40:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.674 "name": "raid_bdev1", 00:13:47.674 "uuid": "9366f3f0-18a8-4366-a407-6f039bd75c10", 00:13:47.674 "strip_size_kb": 64, 00:13:47.674 "state": "online", 00:13:47.674 "raid_level": "raid0", 00:13:47.674 "superblock": true, 00:13:47.674 "num_base_bdevs": 4, 00:13:47.674 "num_base_bdevs_discovered": 4, 00:13:47.674 "num_base_bdevs_operational": 4, 00:13:47.674 "base_bdevs_list": [ 00:13:47.674 { 00:13:47.674 "name": "BaseBdev1", 00:13:47.674 "uuid": "53051864-e90f-5c6d-a9f2-9b877cc62304", 00:13:47.674 "is_configured": true, 00:13:47.674 "data_offset": 2048, 00:13:47.674 "data_size": 63488 00:13:47.674 }, 00:13:47.674 { 00:13:47.674 "name": "BaseBdev2", 00:13:47.674 "uuid": "ef1ad07c-2abf-52cd-8368-de0119095b72", 00:13:47.675 "is_configured": true, 00:13:47.675 "data_offset": 2048, 00:13:47.675 "data_size": 63488 00:13:47.675 }, 00:13:47.675 { 00:13:47.675 "name": "BaseBdev3", 00:13:47.675 "uuid": "d4507b39-66cc-55de-9547-380053ae7384", 00:13:47.675 "is_configured": true, 00:13:47.675 "data_offset": 2048, 00:13:47.675 "data_size": 63488 00:13:47.675 }, 00:13:47.675 { 00:13:47.675 "name": "BaseBdev4", 00:13:47.675 "uuid": "507d750f-cbe3-5d53-86ea-6b3e4d04ec7b", 00:13:47.675 "is_configured": true, 00:13:47.675 "data_offset": 2048, 00:13:47.675 "data_size": 63488 00:13:47.675 } 00:13:47.675 ] 00:13:47.675 }' 00:13:47.675 15:40:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.675 15:40:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.245 15:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:48.245 15:40:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.245 15:40:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.245 [2024-12-06 15:40:31.245384] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:48.245 [2024-12-06 15:40:31.245673] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:48.245 [2024-12-06 15:40:31.248579] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:48.245 [2024-12-06 15:40:31.248655] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.245 [2024-12-06 15:40:31.248707] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:48.245 [2024-12-06 15:40:31.248723] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:48.245 { 00:13:48.245 "results": [ 00:13:48.245 { 00:13:48.245 "job": "raid_bdev1", 00:13:48.245 "core_mask": "0x1", 00:13:48.245 "workload": "randrw", 00:13:48.245 "percentage": 50, 00:13:48.245 "status": "finished", 00:13:48.245 "queue_depth": 1, 00:13:48.245 "io_size": 131072, 00:13:48.245 "runtime": 1.355554, 00:13:48.245 "iops": 11771.570885409214, 00:13:48.245 "mibps": 1471.4463606761517, 00:13:48.245 "io_failed": 1, 00:13:48.245 "io_timeout": 0, 00:13:48.245 "avg_latency_us": 118.58650886287347, 00:13:48.245 "min_latency_us": 27.964658634538154, 00:13:48.245 "max_latency_us": 1638.4 00:13:48.245 } 00:13:48.245 ], 00:13:48.245 "core_count": 1 00:13:48.245 } 00:13:48.245 15:40:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.245 15:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71010 00:13:48.245 15:40:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71010 ']' 00:13:48.245 15:40:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71010 00:13:48.245 15:40:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:13:48.245 15:40:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:48.245 15:40:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71010 00:13:48.245 15:40:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:48.245 killing process with pid 71010 00:13:48.245 15:40:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:48.245 15:40:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71010' 00:13:48.245 15:40:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71010 00:13:48.245 [2024-12-06 15:40:31.285613] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:48.245 15:40:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71010 00:13:48.504 [2024-12-06 15:40:31.650181] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:49.881 15:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.eQ4JtO76ah 00:13:49.881 15:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:49.881 15:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:49.881 15:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:13:49.881 15:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:13:49.881 15:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:49.881 15:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:49.881 15:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:13:49.881 00:13:49.881 real 0m5.033s 00:13:49.881 user 0m5.790s 00:13:49.881 sys 0m0.789s 00:13:49.881 ************************************ 00:13:49.881 END TEST raid_read_error_test 
00:13:49.881 ************************************ 00:13:49.881 15:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:49.881 15:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.881 15:40:33 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:13:49.881 15:40:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:49.881 15:40:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:49.881 15:40:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:49.881 ************************************ 00:13:49.881 START TEST raid_write_error_test 00:13:49.881 ************************************ 00:13:49.881 15:40:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:13:49.881 15:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:13:49.881 15:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:49.881 15:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:49.881 15:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:49.881 15:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:49.881 15:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:49.881 15:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:49.881 15:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:49.881 15:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:49.881 15:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:49.881 15:40:33 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:49.881 15:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:49.881 15:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:49.881 15:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:49.881 15:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:49.881 15:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:49.881 15:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:49.881 15:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:49.881 15:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:49.881 15:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:49.881 15:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:49.881 15:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:49.881 15:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:49.881 15:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:49.881 15:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:13:49.881 15:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:49.881 15:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:49.881 15:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:49.881 15:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.dI06uBMRX8 00:13:49.881 15:40:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71160 00:13:49.881 15:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71160 00:13:49.881 15:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:49.881 15:40:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71160 ']' 00:13:49.881 15:40:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.881 15:40:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:49.881 15:40:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.881 15:40:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:49.881 15:40:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.139 [2024-12-06 15:40:33.194728] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:13:50.139 [2024-12-06 15:40:33.194865] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71160 ] 00:13:50.139 [2024-12-06 15:40:33.380139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.397 [2024-12-06 15:40:33.530265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.655 [2024-12-06 15:40:33.783009] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:50.655 [2024-12-06 15:40:33.783303] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:50.913 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:50.913 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:50.913 15:40:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:50.913 15:40:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:50.913 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.913 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.913 BaseBdev1_malloc 00:13:50.913 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.913 15:40:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:50.913 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.913 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.913 true 00:13:50.913 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:50.913 15:40:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:50.913 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.913 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.913 [2024-12-06 15:40:34.119093] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:50.913 [2024-12-06 15:40:34.119174] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.913 [2024-12-06 15:40:34.119210] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:50.913 [2024-12-06 15:40:34.119232] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.913 [2024-12-06 15:40:34.122491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.913 [2024-12-06 15:40:34.122569] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:50.913 BaseBdev1 00:13:50.913 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.913 15:40:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:50.913 15:40:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:50.913 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.913 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.913 BaseBdev2_malloc 00:13:50.913 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.913 15:40:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:50.913 15:40:34 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.913 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.913 true 00:13:50.913 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.913 15:40:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:50.913 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.913 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.913 [2024-12-06 15:40:34.199460] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:50.913 [2024-12-06 15:40:34.199678] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.913 [2024-12-06 15:40:34.199712] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:50.913 [2024-12-06 15:40:34.199729] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.913 [2024-12-06 15:40:34.202565] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.913 [2024-12-06 15:40:34.202609] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:50.913 BaseBdev2 00:13:50.913 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.913 15:40:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:50.913 15:40:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:50.913 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.913 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:51.171 BaseBdev3_malloc 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.171 true 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.171 [2024-12-06 15:40:34.287011] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:51.171 [2024-12-06 15:40:34.287082] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:51.171 [2024-12-06 15:40:34.287105] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:51.171 [2024-12-06 15:40:34.287122] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:51.171 [2024-12-06 15:40:34.289884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:51.171 [2024-12-06 15:40:34.289929] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:51.171 BaseBdev3 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.171 BaseBdev4_malloc 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.171 true 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.171 [2024-12-06 15:40:34.362223] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:51.171 [2024-12-06 15:40:34.362285] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:51.171 [2024-12-06 15:40:34.362307] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:51.171 [2024-12-06 15:40:34.362322] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:51.171 [2024-12-06 15:40:34.365020] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:51.171 [2024-12-06 15:40:34.365193] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:51.171 BaseBdev4 
00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.171 [2024-12-06 15:40:34.374308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:51.171 [2024-12-06 15:40:34.377095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:51.171 [2024-12-06 15:40:34.377187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:51.171 [2024-12-06 15:40:34.377260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:51.171 [2024-12-06 15:40:34.377522] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:51.171 [2024-12-06 15:40:34.377545] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:51.171 [2024-12-06 15:40:34.377844] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:51.171 [2024-12-06 15:40:34.378024] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:51.171 [2024-12-06 15:40:34.378211] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:51.171 [2024-12-06 15:40:34.378444] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.171 "name": "raid_bdev1", 00:13:51.171 "uuid": "f709ccb3-81a7-4170-bef7-5bd1a4356c49", 00:13:51.171 "strip_size_kb": 64, 00:13:51.171 "state": "online", 00:13:51.171 "raid_level": "raid0", 00:13:51.171 "superblock": true, 00:13:51.171 "num_base_bdevs": 4, 00:13:51.171 "num_base_bdevs_discovered": 4, 00:13:51.171 
"num_base_bdevs_operational": 4, 00:13:51.171 "base_bdevs_list": [ 00:13:51.171 { 00:13:51.171 "name": "BaseBdev1", 00:13:51.171 "uuid": "e2e1a48e-047f-5601-984f-45e8c2cf5d60", 00:13:51.171 "is_configured": true, 00:13:51.171 "data_offset": 2048, 00:13:51.171 "data_size": 63488 00:13:51.171 }, 00:13:51.171 { 00:13:51.171 "name": "BaseBdev2", 00:13:51.171 "uuid": "34585c7f-afdd-57db-b475-62ed8cc9a7d7", 00:13:51.171 "is_configured": true, 00:13:51.171 "data_offset": 2048, 00:13:51.171 "data_size": 63488 00:13:51.171 }, 00:13:51.171 { 00:13:51.171 "name": "BaseBdev3", 00:13:51.171 "uuid": "7bfb630d-cecc-5858-8ed6-563b21b54843", 00:13:51.171 "is_configured": true, 00:13:51.171 "data_offset": 2048, 00:13:51.171 "data_size": 63488 00:13:51.171 }, 00:13:51.171 { 00:13:51.171 "name": "BaseBdev4", 00:13:51.171 "uuid": "e2f38402-2b27-5b6f-b4d3-ea3b5515f20e", 00:13:51.171 "is_configured": true, 00:13:51.171 "data_offset": 2048, 00:13:51.171 "data_size": 63488 00:13:51.171 } 00:13:51.171 ] 00:13:51.171 }' 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.171 15:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.739 15:40:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:51.739 15:40:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:51.739 [2024-12-06 15:40:34.916015] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:52.673 15:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:52.674 15:40:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.674 15:40:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.674 15:40:35 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.674 15:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:52.674 15:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:52.674 15:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:52.674 15:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:52.674 15:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.674 15:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.674 15:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:52.674 15:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:52.674 15:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:52.674 15:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.674 15:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.674 15:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.674 15:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.674 15:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.674 15:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.674 15:40:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.674 15:40:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.674 15:40:35 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.674 15:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.674 "name": "raid_bdev1", 00:13:52.674 "uuid": "f709ccb3-81a7-4170-bef7-5bd1a4356c49", 00:13:52.674 "strip_size_kb": 64, 00:13:52.674 "state": "online", 00:13:52.674 "raid_level": "raid0", 00:13:52.674 "superblock": true, 00:13:52.674 "num_base_bdevs": 4, 00:13:52.674 "num_base_bdevs_discovered": 4, 00:13:52.674 "num_base_bdevs_operational": 4, 00:13:52.674 "base_bdevs_list": [ 00:13:52.674 { 00:13:52.674 "name": "BaseBdev1", 00:13:52.674 "uuid": "e2e1a48e-047f-5601-984f-45e8c2cf5d60", 00:13:52.674 "is_configured": true, 00:13:52.674 "data_offset": 2048, 00:13:52.674 "data_size": 63488 00:13:52.674 }, 00:13:52.674 { 00:13:52.674 "name": "BaseBdev2", 00:13:52.674 "uuid": "34585c7f-afdd-57db-b475-62ed8cc9a7d7", 00:13:52.674 "is_configured": true, 00:13:52.674 "data_offset": 2048, 00:13:52.674 "data_size": 63488 00:13:52.674 }, 00:13:52.674 { 00:13:52.674 "name": "BaseBdev3", 00:13:52.674 "uuid": "7bfb630d-cecc-5858-8ed6-563b21b54843", 00:13:52.674 "is_configured": true, 00:13:52.674 "data_offset": 2048, 00:13:52.674 "data_size": 63488 00:13:52.674 }, 00:13:52.674 { 00:13:52.674 "name": "BaseBdev4", 00:13:52.674 "uuid": "e2f38402-2b27-5b6f-b4d3-ea3b5515f20e", 00:13:52.674 "is_configured": true, 00:13:52.674 "data_offset": 2048, 00:13:52.674 "data_size": 63488 00:13:52.674 } 00:13:52.674 ] 00:13:52.674 }' 00:13:52.674 15:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.674 15:40:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.241 15:40:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:53.241 15:40:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.241 15:40:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:13:53.241 [2024-12-06 15:40:36.270106] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:53.241 [2024-12-06 15:40:36.270147] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:53.241 [2024-12-06 15:40:36.272985] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:53.241 [2024-12-06 15:40:36.273254] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.241 [2024-12-06 15:40:36.273333] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:53.241 [2024-12-06 15:40:36.273353] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:53.241 { 00:13:53.241 "results": [ 00:13:53.241 { 00:13:53.241 "job": "raid_bdev1", 00:13:53.241 "core_mask": "0x1", 00:13:53.241 "workload": "randrw", 00:13:53.241 "percentage": 50, 00:13:53.241 "status": "finished", 00:13:53.241 "queue_depth": 1, 00:13:53.241 "io_size": 131072, 00:13:53.241 "runtime": 1.353493, 00:13:53.241 "iops": 12875.574531970244, 00:13:53.241 "mibps": 1609.4468164962805, 00:13:53.241 "io_failed": 1, 00:13:53.241 "io_timeout": 0, 00:13:53.241 "avg_latency_us": 109.19881536704541, 00:13:53.241 "min_latency_us": 27.347791164658634, 00:13:53.241 "max_latency_us": 1566.0208835341366 00:13:53.241 } 00:13:53.241 ], 00:13:53.241 "core_count": 1 00:13:53.241 } 00:13:53.241 15:40:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.241 15:40:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71160 00:13:53.241 15:40:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71160 ']' 00:13:53.241 15:40:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71160 00:13:53.241 15:40:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # 
uname 00:13:53.241 15:40:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:53.241 15:40:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71160 00:13:53.241 15:40:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:53.241 15:40:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:53.241 15:40:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71160' 00:13:53.241 killing process with pid 71160 00:13:53.241 15:40:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71160 00:13:53.241 [2024-12-06 15:40:36.332242] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:53.241 15:40:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71160 00:13:53.499 [2024-12-06 15:40:36.701472] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:54.875 15:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.dI06uBMRX8 00:13:54.875 15:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:54.875 15:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:54.875 15:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:13:54.875 15:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:13:54.875 ************************************ 00:13:54.875 END TEST raid_write_error_test 00:13:54.875 ************************************ 00:13:54.875 15:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:54.875 15:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:54.875 15:40:38 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:13:54.875 00:13:54.875 real 0m4.965s 00:13:54.875 user 0m5.670s 00:13:54.875 sys 0m0.786s 00:13:54.875 15:40:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:54.875 15:40:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.875 15:40:38 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:54.875 15:40:38 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:13:54.875 15:40:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:54.875 15:40:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:54.875 15:40:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:54.875 ************************************ 00:13:54.875 START TEST raid_state_function_test 00:13:54.875 ************************************ 00:13:54.875 15:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:13:54.875 15:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:13:54.875 15:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:54.875 15:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:54.875 15:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:54.875 15:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:54.875 15:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:54.876 15:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:54.876 15:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:54.876 15:40:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:54.876 15:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:54.876 15:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:54.876 15:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:54.876 15:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:54.876 15:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:54.876 15:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:54.876 15:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:54.876 15:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:54.876 15:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:54.876 15:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:54.876 15:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:54.876 15:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:54.876 15:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:54.876 15:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:54.876 15:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:54.876 15:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:13:54.876 15:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:54.876 15:40:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:54.876 15:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:54.876 15:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:54.876 Process raid pid: 71306 00:13:54.876 15:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71306 00:13:54.876 15:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:54.876 15:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71306' 00:13:54.876 15:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71306 00:13:54.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.876 15:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71306 ']' 00:13:54.876 15:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.876 15:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:54.876 15:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.876 15:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:54.876 15:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.134 [2024-12-06 15:40:38.228861] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:13:55.134 [2024-12-06 15:40:38.229192] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:55.134 [2024-12-06 15:40:38.411335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:55.393 [2024-12-06 15:40:38.555100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:55.652 [2024-12-06 15:40:38.823789] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:55.652 [2024-12-06 15:40:38.824003] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:55.911  15:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:55.911  15:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:13:55.911  15:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:13:55.911  15:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:55.911  15:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:55.911 [2024-12-06 15:40:39.115344] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:13:55.911 [2024-12-06 15:40:39.115434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:13:55.911 [2024-12-06 15:40:39.115452] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:55.911 [2024-12-06 15:40:39.115472] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:55.911 [2024-12-06 15:40:39.115484] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:13:55.911 [2024-12-06 15:40:39.115522] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:13:55.911 [2024-12-06 15:40:39.115535] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:13:55.911 [2024-12-06 15:40:39.115556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:13:55.911  15:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:55.911  15:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:13:55.911  15:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:55.911  15:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:55.911  15:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:13:55.911  15:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:55.911  15:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:55.911  15:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:55.911  15:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:55.911  15:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:55.911  15:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:55.911  15:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:55.911  15:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:55.911  15:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:55.911  15:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:55.911  15:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:55.911  15:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:55.911     "name": "Existed_Raid",
00:13:55.911     "uuid": "00000000-0000-0000-0000-000000000000",
00:13:55.911     "strip_size_kb": 64,
00:13:55.911     "state": "configuring",
00:13:55.911     "raid_level": "concat",
00:13:55.911     "superblock": false,
00:13:55.911     "num_base_bdevs": 4,
00:13:55.911     "num_base_bdevs_discovered": 0,
00:13:55.911     "num_base_bdevs_operational": 4,
00:13:55.911     "base_bdevs_list": [
00:13:55.911       {
00:13:55.911         "name": "BaseBdev1",
00:13:55.911         "uuid": "00000000-0000-0000-0000-000000000000",
00:13:55.911         "is_configured": false,
00:13:55.911         "data_offset": 0,
00:13:55.911         "data_size": 0
00:13:55.911       },
00:13:55.911       {
00:13:55.911         "name": "BaseBdev2",
00:13:55.911         "uuid": "00000000-0000-0000-0000-000000000000",
00:13:55.911         "is_configured": false,
00:13:55.911         "data_offset": 0,
00:13:55.911         "data_size": 0
00:13:55.911       },
00:13:55.911       {
00:13:55.911         "name": "BaseBdev3",
00:13:55.911         "uuid": "00000000-0000-0000-0000-000000000000",
00:13:55.911         "is_configured": false,
00:13:55.911         "data_offset": 0,
00:13:55.911         "data_size": 0
00:13:55.911       },
00:13:55.911       {
00:13:55.911         "name": "BaseBdev4",
00:13:55.912         "uuid": "00000000-0000-0000-0000-000000000000",
00:13:55.912         "is_configured": false,
00:13:55.912         "data_offset": 0,
00:13:55.912         "data_size": 0
00:13:55.912       }
00:13:55.912     ]
00:13:55.912   }'
00:13:55.912  15:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:55.912  15:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:56.480 [2024-12-06 15:40:39.542685] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:13:56.480 [2024-12-06 15:40:39.542875] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:56.480 [2024-12-06 15:40:39.550698] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:13:56.480 [2024-12-06 15:40:39.550749] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:13:56.480 [2024-12-06 15:40:39.550762] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:56.480 [2024-12-06 15:40:39.550777] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:56.480 [2024-12-06 15:40:39.550792] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:13:56.480 [2024-12-06 15:40:39.550806] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:13:56.480 [2024-12-06 15:40:39.550815] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:13:56.480 [2024-12-06 15:40:39.550829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:56.480 [2024-12-06 15:40:39.608931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:56.480 BaseBdev1
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:56.480 [
00:13:56.480   {
00:13:56.480     "name": "BaseBdev1",
00:13:56.480     "aliases": [
00:13:56.480       "b426ff01-0bc0-4c7e-9657-0681a20becf2"
00:13:56.480     ],
00:13:56.480     "product_name": "Malloc disk",
00:13:56.480     "block_size": 512,
00:13:56.480     "num_blocks": 65536,
00:13:56.480     "uuid": "b426ff01-0bc0-4c7e-9657-0681a20becf2",
00:13:56.480     "assigned_rate_limits": {
00:13:56.480       "rw_ios_per_sec": 0,
00:13:56.480       "rw_mbytes_per_sec": 0,
00:13:56.480       "r_mbytes_per_sec": 0,
00:13:56.480       "w_mbytes_per_sec": 0
00:13:56.480     },
00:13:56.480     "claimed": true,
00:13:56.480     "claim_type": "exclusive_write",
00:13:56.480     "zoned": false,
00:13:56.480     "supported_io_types": {
00:13:56.480       "read": true,
00:13:56.480       "write": true,
00:13:56.480       "unmap": true,
00:13:56.480       "flush": true,
00:13:56.480       "reset": true,
00:13:56.480       "nvme_admin": false,
00:13:56.480       "nvme_io": false,
00:13:56.480       "nvme_io_md": false,
00:13:56.480       "write_zeroes": true,
00:13:56.480       "zcopy": true,
00:13:56.480       "get_zone_info": false,
00:13:56.480       "zone_management": false,
00:13:56.480       "zone_append": false,
00:13:56.480       "compare": false,
00:13:56.480       "compare_and_write": false,
00:13:56.480       "abort": true,
00:13:56.480       "seek_hole": false,
00:13:56.480       "seek_data": false,
00:13:56.480       "copy": true,
00:13:56.480       "nvme_iov_md": false
00:13:56.480     },
00:13:56.480     "memory_domains": [
00:13:56.480       {
00:13:56.480         "dma_device_id": "system",
00:13:56.480         "dma_device_type": 1
00:13:56.480       },
00:13:56.480       {
00:13:56.480         "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:56.480         "dma_device_type": 2
00:13:56.480       }
00:13:56.480     ],
00:13:56.480     "driver_specific": {}
00:13:56.480   }
00:13:56.480 ]
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:56.480     "name": "Existed_Raid",
00:13:56.480     "uuid": "00000000-0000-0000-0000-000000000000",
00:13:56.480     "strip_size_kb": 64,
00:13:56.480     "state": "configuring",
00:13:56.480     "raid_level": "concat",
00:13:56.480     "superblock": false,
00:13:56.480     "num_base_bdevs": 4,
00:13:56.480     "num_base_bdevs_discovered": 1,
00:13:56.480     "num_base_bdevs_operational": 4,
00:13:56.480     "base_bdevs_list": [
00:13:56.480       {
00:13:56.480         "name": "BaseBdev1",
00:13:56.480         "uuid": "b426ff01-0bc0-4c7e-9657-0681a20becf2",
00:13:56.480         "is_configured": true,
00:13:56.480         "data_offset": 0,
00:13:56.480         "data_size": 65536
00:13:56.480       },
00:13:56.480       {
00:13:56.480         "name": "BaseBdev2",
00:13:56.480         "uuid": "00000000-0000-0000-0000-000000000000",
00:13:56.480         "is_configured": false,
00:13:56.480         "data_offset": 0,
00:13:56.480         "data_size": 0
00:13:56.480       },
00:13:56.480       {
00:13:56.480         "name": "BaseBdev3",
00:13:56.480         "uuid": "00000000-0000-0000-0000-000000000000",
00:13:56.480         "is_configured": false,
00:13:56.480         "data_offset": 0,
00:13:56.480         "data_size": 0
00:13:56.480       },
00:13:56.480       {
00:13:56.480         "name": "BaseBdev4",
00:13:56.480         "uuid": "00000000-0000-0000-0000-000000000000",
00:13:56.480         "is_configured": false,
00:13:56.480         "data_offset": 0,
00:13:56.480         "data_size": 0
00:13:56.480       }
00:13:56.480     ]
00:13:56.480   }'
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:56.480  15:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:57.049  15:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:13:57.049  15:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:57.049  15:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:57.049 [2024-12-06 15:40:40.068430] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:13:57.049 [2024-12-06 15:40:40.068687] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:13:57.049  15:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:57.049  15:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:13:57.049  15:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:57.049  15:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:57.049 [2024-12-06 15:40:40.076492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:57.049 [2024-12-06 15:40:40.078958] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:57.049 [2024-12-06 15:40:40.079006] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:57.049 [2024-12-06 15:40:40.079019] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:13:57.049 [2024-12-06 15:40:40.079034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:13:57.049 [2024-12-06 15:40:40.079042] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:13:57.049 [2024-12-06 15:40:40.079054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:13:57.049  15:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:57.049  15:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:13:57.049  15:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:13:57.049  15:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:13:57.049  15:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:57.049  15:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:57.049  15:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:13:57.049  15:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:57.049  15:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:57.049  15:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:57.049  15:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:57.049  15:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:57.049  15:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:57.049  15:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:57.049  15:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:57.049  15:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:57.049  15:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:57.049  15:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:57.049  15:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:57.049     "name": "Existed_Raid",
00:13:57.049     "uuid": "00000000-0000-0000-0000-000000000000",
00:13:57.049     "strip_size_kb": 64,
00:13:57.049     "state": "configuring",
00:13:57.049     "raid_level": "concat",
00:13:57.049     "superblock": false,
00:13:57.049     "num_base_bdevs": 4,
00:13:57.049     "num_base_bdevs_discovered": 1,
00:13:57.049     "num_base_bdevs_operational": 4,
00:13:57.049     "base_bdevs_list": [
00:13:57.049       {
00:13:57.049         "name": "BaseBdev1",
00:13:57.049         "uuid": "b426ff01-0bc0-4c7e-9657-0681a20becf2",
00:13:57.049         "is_configured": true,
00:13:57.049         "data_offset": 0,
00:13:57.049         "data_size": 65536
00:13:57.049       },
00:13:57.049       {
00:13:57.049         "name": "BaseBdev2",
00:13:57.049         "uuid": "00000000-0000-0000-0000-000000000000",
00:13:57.049         "is_configured": false,
00:13:57.049         "data_offset": 0,
00:13:57.049         "data_size": 0
00:13:57.049       },
00:13:57.049       {
00:13:57.049         "name": "BaseBdev3",
00:13:57.049         "uuid": "00000000-0000-0000-0000-000000000000",
00:13:57.049         "is_configured": false,
00:13:57.049         "data_offset": 0,
00:13:57.049         "data_size": 0
00:13:57.049       },
00:13:57.049       {
00:13:57.049         "name": "BaseBdev4",
00:13:57.049         "uuid": "00000000-0000-0000-0000-000000000000",
00:13:57.049         "is_configured": false,
00:13:57.049         "data_offset": 0,
00:13:57.049         "data_size": 0
00:13:57.049       }
00:13:57.049     ]
00:13:57.049   }'
00:13:57.049  15:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:57.049  15:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:57.308  15:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:13:57.308  15:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:57.308  15:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:57.308 [2024-12-06 15:40:40.526419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:13:57.308 BaseBdev2
00:13:57.308  15:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:57.308  15:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:13:57.308  15:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:13:57.308  15:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:13:57.308  15:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:13:57.308  15:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:13:57.308  15:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:13:57.308  15:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:13:57.308  15:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:57.308  15:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:57.308  15:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:57.308  15:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:13:57.308  15:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:57.308  15:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:57.308 [
00:13:57.308   {
00:13:57.308     "name": "BaseBdev2",
00:13:57.308     "aliases": [
00:13:57.308       "703f9cf3-7b09-4abc-86d2-39c4e842cd75"
00:13:57.309     ],
00:13:57.309     "product_name": "Malloc disk",
00:13:57.309     "block_size": 512,
00:13:57.309     "num_blocks": 65536,
00:13:57.309     "uuid": "703f9cf3-7b09-4abc-86d2-39c4e842cd75",
00:13:57.309     "assigned_rate_limits": {
00:13:57.309       "rw_ios_per_sec": 0,
00:13:57.309       "rw_mbytes_per_sec": 0,
00:13:57.309       "r_mbytes_per_sec": 0,
00:13:57.309       "w_mbytes_per_sec": 0
00:13:57.309     },
00:13:57.309     "claimed": true,
00:13:57.309     "claim_type": "exclusive_write",
00:13:57.309     "zoned": false,
00:13:57.309     "supported_io_types": {
00:13:57.309       "read": true,
00:13:57.309       "write": true,
00:13:57.309       "unmap": true,
00:13:57.309       "flush": true,
00:13:57.309       "reset": true,
00:13:57.309       "nvme_admin": false,
00:13:57.309       "nvme_io": false,
00:13:57.309       "nvme_io_md": false,
00:13:57.309       "write_zeroes": true,
00:13:57.309       "zcopy": true,
00:13:57.309       "get_zone_info": false,
00:13:57.309       "zone_management": false,
00:13:57.309       "zone_append": false,
00:13:57.309       "compare": false,
00:13:57.309       "compare_and_write": false,
00:13:57.309       "abort": true,
00:13:57.309       "seek_hole": false,
00:13:57.309       "seek_data": false,
00:13:57.309       "copy": true,
00:13:57.309       "nvme_iov_md": false
00:13:57.309     },
00:13:57.309     "memory_domains": [
00:13:57.309       {
00:13:57.309         "dma_device_id": "system",
00:13:57.309         "dma_device_type": 1
00:13:57.309       },
00:13:57.309       {
00:13:57.309         "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:57.309         "dma_device_type": 2
00:13:57.309       }
00:13:57.309     ],
00:13:57.309     "driver_specific": {}
00:13:57.309   }
00:13:57.309 ]
00:13:57.309  15:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:57.309  15:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:13:57.309  15:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:13:57.309  15:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:13:57.309  15:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:13:57.309  15:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:57.309  15:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:57.309  15:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:13:57.309  15:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:57.309  15:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:57.309  15:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:57.309  15:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:57.309  15:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:57.309  15:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:57.309  15:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:57.309  15:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:57.309  15:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:57.309  15:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:57.568  15:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:57.568  15:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:57.568     "name": "Existed_Raid",
00:13:57.568     "uuid": "00000000-0000-0000-0000-000000000000",
00:13:57.568     "strip_size_kb": 64,
00:13:57.568     "state": "configuring",
00:13:57.568     "raid_level": "concat",
00:13:57.568     "superblock": false,
00:13:57.568     "num_base_bdevs": 4,
00:13:57.568     "num_base_bdevs_discovered": 2,
00:13:57.568     "num_base_bdevs_operational": 4,
00:13:57.568     "base_bdevs_list": [
00:13:57.568       {
00:13:57.568         "name": "BaseBdev1",
00:13:57.568         "uuid": "b426ff01-0bc0-4c7e-9657-0681a20becf2",
00:13:57.568         "is_configured": true,
00:13:57.568         "data_offset": 0,
00:13:57.568         "data_size": 65536
00:13:57.568       },
00:13:57.568       {
00:13:57.568         "name": "BaseBdev2",
00:13:57.568         "uuid": "703f9cf3-7b09-4abc-86d2-39c4e842cd75",
00:13:57.568         "is_configured": true,
00:13:57.568         "data_offset": 0,
00:13:57.568         "data_size": 65536
00:13:57.568       },
00:13:57.568       {
00:13:57.568         "name": "BaseBdev3",
00:13:57.568         "uuid": "00000000-0000-0000-0000-000000000000",
00:13:57.568         "is_configured": false,
00:13:57.568         "data_offset": 0,
00:13:57.568         "data_size": 0
00:13:57.568       },
00:13:57.568       {
00:13:57.568         "name": "BaseBdev4",
00:13:57.568         "uuid": "00000000-0000-0000-0000-000000000000",
00:13:57.568         "is_configured": false,
00:13:57.568         "data_offset": 0,
00:13:57.568         "data_size": 0
00:13:57.568       }
00:13:57.568     ]
00:13:57.568   }'
00:13:57.568  15:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:57.568  15:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:57.828  15:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:13:57.828  15:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:57.828  15:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:57.828 [2024-12-06 15:40:41.025322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:13:57.828 BaseBdev3
00:13:57.828  15:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:57.828  15:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:13:57.828  15:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:13:57.828  15:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:13:57.828  15:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:13:57.828  15:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:13:57.828  15:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:13:57.828  15:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:13:57.828  15:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:57.828  15:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:57.828  15:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:57.828  15:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:13:57.828  15:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:57.828  15:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:57.828 [
00:13:57.828   {
00:13:57.828     "name": "BaseBdev3",
00:13:57.828     "aliases": [
00:13:57.828       "93cd3c22-5ae7-486c-be7f-224634ada91f"
00:13:57.828     ],
00:13:57.828     "product_name": "Malloc disk",
00:13:57.828     "block_size": 512,
00:13:57.828     "num_blocks": 65536,
00:13:57.828     "uuid": "93cd3c22-5ae7-486c-be7f-224634ada91f",
00:13:57.828     "assigned_rate_limits": {
00:13:57.828       "rw_ios_per_sec": 0,
00:13:57.828       "rw_mbytes_per_sec": 0,
00:13:57.828       "r_mbytes_per_sec": 0,
00:13:57.828       "w_mbytes_per_sec": 0
00:13:57.828     },
00:13:57.828     "claimed": true,
00:13:57.828     "claim_type": "exclusive_write",
00:13:57.828     "zoned": false,
00:13:57.828     "supported_io_types": {
00:13:57.828       "read": true,
00:13:57.828       "write": true,
00:13:57.828       "unmap": true,
00:13:57.828       "flush": true,
00:13:57.828       "reset": true,
00:13:57.828       "nvme_admin": false,
00:13:57.828       "nvme_io": false,
00:13:57.828       "nvme_io_md": false,
00:13:57.828       "write_zeroes": true,
00:13:57.828       "zcopy": true,
00:13:57.828       "get_zone_info": false,
00:13:57.828       "zone_management": false,
00:13:57.828       "zone_append": false,
00:13:57.828       "compare": false,
00:13:57.828       "compare_and_write": false,
00:13:57.828       "abort": true,
00:13:57.828       "seek_hole": false,
00:13:57.828       "seek_data": false,
00:13:57.828       "copy": true,
00:13:57.828       "nvme_iov_md": false
00:13:57.828     },
00:13:57.828     "memory_domains": [
00:13:57.828       {
00:13:57.828         "dma_device_id": "system",
00:13:57.828         "dma_device_type": 1
00:13:57.828       },
00:13:57.828       {
00:13:57.828         "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:57.828         "dma_device_type": 2
00:13:57.828       }
00:13:57.828     ],
00:13:57.828     "driver_specific": {}
00:13:57.828   }
00:13:57.828 ]
00:13:57.828  15:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:57.828  15:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:13:57.828  15:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:13:57.828  15:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:13:57.828  15:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:13:57.828  15:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:57.828  15:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:57.828  15:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:13:57.828  15:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:57.828  15:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:57.828  15:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:57.828  15:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:57.828  15:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:57.828  15:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:57.828  15:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:57.828  15:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:57.828  15:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:57.828  15:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:57.828  15:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:58.088  15:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:58.088     "name": "Existed_Raid",
00:13:58.088     "uuid": "00000000-0000-0000-0000-000000000000",
00:13:58.088     "strip_size_kb": 64,
00:13:58.088     "state": "configuring",
00:13:58.088     "raid_level": "concat",
00:13:58.088     "superblock": false,
00:13:58.088     "num_base_bdevs": 4,
00:13:58.088     "num_base_bdevs_discovered": 3,
00:13:58.088     "num_base_bdevs_operational": 4,
00:13:58.088     "base_bdevs_list": [
00:13:58.088       {
00:13:58.088         "name": "BaseBdev1",
00:13:58.088         "uuid": "b426ff01-0bc0-4c7e-9657-0681a20becf2",
00:13:58.088         "is_configured": true,
00:13:58.088         "data_offset": 0,
00:13:58.088         "data_size": 65536
00:13:58.088       },
00:13:58.088       {
00:13:58.088         "name": "BaseBdev2",
00:13:58.088         "uuid": "703f9cf3-7b09-4abc-86d2-39c4e842cd75",
00:13:58.088         "is_configured": true,
00:13:58.088         "data_offset": 0,
00:13:58.088         "data_size": 65536
00:13:58.088       },
00:13:58.088       {
00:13:58.088         "name": "BaseBdev3",
00:13:58.088         "uuid": "93cd3c22-5ae7-486c-be7f-224634ada91f",
00:13:58.088         "is_configured": true,
00:13:58.088         "data_offset": 0,
00:13:58.088         "data_size": 65536
00:13:58.088       },
00:13:58.088       {
00:13:58.088         "name": "BaseBdev4",
00:13:58.088         "uuid": "00000000-0000-0000-0000-000000000000",
00:13:58.088         "is_configured": false,
00:13:58.088         "data_offset": 0,
00:13:58.088         "data_size": 0
00:13:58.088       }
00:13:58.088     ]
00:13:58.088   }'
00:13:58.088  15:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:58.088  15:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:58.348  15:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:13:58.348  15:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:58.348  15:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:58.348 [2024-12-06 15:40:41.557593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:13:58.348 [2024-12-06 15:40:41.557659] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:13:58.348 [2024-12-06 15:40:41.557670] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512
00:13:58.348 [2024-12-06 15:40:41.558020] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:13:58.348 [2024-12-06 15:40:41.558220] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:13:58.348 [2024-12-06 15:40:41.558242] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:13:58.348 [2024-12-06 15:40:41.558559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:58.348 BaseBdev4
00:13:58.348  15:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:58.348  15:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4
00:13:58.348  15:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4
00:13:58.348  15:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:13:58.348  15:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:13:58.348  15:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:13:58.348  15:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:13:58.348  15:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:13:58.348  15:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:58.348  15:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:58.348  15:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:58.348  15:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:13:58.348  15:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:58.348  15:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:58.348 [
00:13:58.348   {
00:13:58.348     "name": "BaseBdev4",
00:13:58.348     "aliases": [
00:13:58.348       "22969152-b797-4f3f-a51c-6a5039527c69"
00:13:58.349     ],
00:13:58.349     "product_name": "Malloc disk",
00:13:58.349     "block_size": 512,
00:13:58.349     "num_blocks": 65536,
00:13:58.349     "uuid": "22969152-b797-4f3f-a51c-6a5039527c69",
00:13:58.349     "assigned_rate_limits": {
00:13:58.349       "rw_ios_per_sec": 0,
00:13:58.349       "rw_mbytes_per_sec": 0,
00:13:58.349       "r_mbytes_per_sec": 0,
00:13:58.349       "w_mbytes_per_sec": 0
00:13:58.349     },
00:13:58.349     "claimed": true,
00:13:58.349     "claim_type": "exclusive_write",
00:13:58.349     "zoned": false,
00:13:58.349     "supported_io_types": {
00:13:58.349       "read": true,
00:13:58.349       "write": true,
00:13:58.349       "unmap": true,
00:13:58.349       "flush": true,
00:13:58.349       "reset": true,
"nvme_admin": false, 00:13:58.349 "nvme_io": false, 00:13:58.349 "nvme_io_md": false, 00:13:58.349 "write_zeroes": true, 00:13:58.349 "zcopy": true, 00:13:58.349 "get_zone_info": false, 00:13:58.349 "zone_management": false, 00:13:58.349 "zone_append": false, 00:13:58.349 "compare": false, 00:13:58.349 "compare_and_write": false, 00:13:58.349 "abort": true, 00:13:58.349 "seek_hole": false, 00:13:58.349 "seek_data": false, 00:13:58.349 "copy": true, 00:13:58.349 "nvme_iov_md": false 00:13:58.349 }, 00:13:58.349 "memory_domains": [ 00:13:58.349 { 00:13:58.349 "dma_device_id": "system", 00:13:58.349 "dma_device_type": 1 00:13:58.349 }, 00:13:58.349 { 00:13:58.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.349 "dma_device_type": 2 00:13:58.349 } 00:13:58.349 ], 00:13:58.349 "driver_specific": {} 00:13:58.349 } 00:13:58.349 ] 00:13:58.349 15:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.349 15:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:58.349 15:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:58.349 15:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:58.349 15:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:13:58.349 15:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:58.349 15:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.349 15:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:58.349 15:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:58.349 15:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:58.349 
15:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.349 15:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.349 15:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.349 15:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.349 15:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.349 15:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.349 15:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.349 15:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.349 15:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.608 15:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.609 "name": "Existed_Raid", 00:13:58.609 "uuid": "a9c72bc6-cf14-40a6-9d68-67237f7c3ed6", 00:13:58.609 "strip_size_kb": 64, 00:13:58.609 "state": "online", 00:13:58.609 "raid_level": "concat", 00:13:58.609 "superblock": false, 00:13:58.609 "num_base_bdevs": 4, 00:13:58.609 "num_base_bdevs_discovered": 4, 00:13:58.609 "num_base_bdevs_operational": 4, 00:13:58.609 "base_bdevs_list": [ 00:13:58.609 { 00:13:58.609 "name": "BaseBdev1", 00:13:58.609 "uuid": "b426ff01-0bc0-4c7e-9657-0681a20becf2", 00:13:58.609 "is_configured": true, 00:13:58.609 "data_offset": 0, 00:13:58.609 "data_size": 65536 00:13:58.609 }, 00:13:58.609 { 00:13:58.609 "name": "BaseBdev2", 00:13:58.609 "uuid": "703f9cf3-7b09-4abc-86d2-39c4e842cd75", 00:13:58.609 "is_configured": true, 00:13:58.609 "data_offset": 0, 00:13:58.609 "data_size": 65536 00:13:58.609 }, 00:13:58.609 { 00:13:58.609 "name": "BaseBdev3", 
00:13:58.609 "uuid": "93cd3c22-5ae7-486c-be7f-224634ada91f", 00:13:58.609 "is_configured": true, 00:13:58.609 "data_offset": 0, 00:13:58.609 "data_size": 65536 00:13:58.609 }, 00:13:58.609 { 00:13:58.609 "name": "BaseBdev4", 00:13:58.609 "uuid": "22969152-b797-4f3f-a51c-6a5039527c69", 00:13:58.609 "is_configured": true, 00:13:58.609 "data_offset": 0, 00:13:58.609 "data_size": 65536 00:13:58.609 } 00:13:58.609 ] 00:13:58.609 }' 00:13:58.609 15:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.609 15:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.868 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:58.868 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:58.868 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:58.868 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:58.868 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:58.868 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:58.868 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:58.868 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:58.868 15:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.868 15:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.868 [2024-12-06 15:40:42.057316] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:58.868 15:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.868 
15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:58.868 "name": "Existed_Raid", 00:13:58.868 "aliases": [ 00:13:58.868 "a9c72bc6-cf14-40a6-9d68-67237f7c3ed6" 00:13:58.868 ], 00:13:58.868 "product_name": "Raid Volume", 00:13:58.868 "block_size": 512, 00:13:58.868 "num_blocks": 262144, 00:13:58.868 "uuid": "a9c72bc6-cf14-40a6-9d68-67237f7c3ed6", 00:13:58.868 "assigned_rate_limits": { 00:13:58.868 "rw_ios_per_sec": 0, 00:13:58.868 "rw_mbytes_per_sec": 0, 00:13:58.868 "r_mbytes_per_sec": 0, 00:13:58.868 "w_mbytes_per_sec": 0 00:13:58.868 }, 00:13:58.868 "claimed": false, 00:13:58.868 "zoned": false, 00:13:58.868 "supported_io_types": { 00:13:58.868 "read": true, 00:13:58.868 "write": true, 00:13:58.868 "unmap": true, 00:13:58.868 "flush": true, 00:13:58.868 "reset": true, 00:13:58.868 "nvme_admin": false, 00:13:58.868 "nvme_io": false, 00:13:58.868 "nvme_io_md": false, 00:13:58.868 "write_zeroes": true, 00:13:58.868 "zcopy": false, 00:13:58.868 "get_zone_info": false, 00:13:58.868 "zone_management": false, 00:13:58.868 "zone_append": false, 00:13:58.868 "compare": false, 00:13:58.868 "compare_and_write": false, 00:13:58.868 "abort": false, 00:13:58.868 "seek_hole": false, 00:13:58.868 "seek_data": false, 00:13:58.868 "copy": false, 00:13:58.868 "nvme_iov_md": false 00:13:58.868 }, 00:13:58.868 "memory_domains": [ 00:13:58.868 { 00:13:58.868 "dma_device_id": "system", 00:13:58.868 "dma_device_type": 1 00:13:58.868 }, 00:13:58.868 { 00:13:58.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.868 "dma_device_type": 2 00:13:58.868 }, 00:13:58.868 { 00:13:58.868 "dma_device_id": "system", 00:13:58.868 "dma_device_type": 1 00:13:58.868 }, 00:13:58.868 { 00:13:58.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.868 "dma_device_type": 2 00:13:58.868 }, 00:13:58.868 { 00:13:58.868 "dma_device_id": "system", 00:13:58.868 "dma_device_type": 1 00:13:58.868 }, 00:13:58.868 { 00:13:58.868 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:58.868 "dma_device_type": 2 00:13:58.868 }, 00:13:58.868 { 00:13:58.868 "dma_device_id": "system", 00:13:58.868 "dma_device_type": 1 00:13:58.868 }, 00:13:58.868 { 00:13:58.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.868 "dma_device_type": 2 00:13:58.868 } 00:13:58.868 ], 00:13:58.868 "driver_specific": { 00:13:58.868 "raid": { 00:13:58.868 "uuid": "a9c72bc6-cf14-40a6-9d68-67237f7c3ed6", 00:13:58.868 "strip_size_kb": 64, 00:13:58.868 "state": "online", 00:13:58.868 "raid_level": "concat", 00:13:58.869 "superblock": false, 00:13:58.869 "num_base_bdevs": 4, 00:13:58.869 "num_base_bdevs_discovered": 4, 00:13:58.869 "num_base_bdevs_operational": 4, 00:13:58.869 "base_bdevs_list": [ 00:13:58.869 { 00:13:58.869 "name": "BaseBdev1", 00:13:58.869 "uuid": "b426ff01-0bc0-4c7e-9657-0681a20becf2", 00:13:58.869 "is_configured": true, 00:13:58.869 "data_offset": 0, 00:13:58.869 "data_size": 65536 00:13:58.869 }, 00:13:58.869 { 00:13:58.869 "name": "BaseBdev2", 00:13:58.869 "uuid": "703f9cf3-7b09-4abc-86d2-39c4e842cd75", 00:13:58.869 "is_configured": true, 00:13:58.869 "data_offset": 0, 00:13:58.869 "data_size": 65536 00:13:58.869 }, 00:13:58.869 { 00:13:58.869 "name": "BaseBdev3", 00:13:58.869 "uuid": "93cd3c22-5ae7-486c-be7f-224634ada91f", 00:13:58.869 "is_configured": true, 00:13:58.869 "data_offset": 0, 00:13:58.869 "data_size": 65536 00:13:58.869 }, 00:13:58.869 { 00:13:58.869 "name": "BaseBdev4", 00:13:58.869 "uuid": "22969152-b797-4f3f-a51c-6a5039527c69", 00:13:58.869 "is_configured": true, 00:13:58.869 "data_offset": 0, 00:13:58.869 "data_size": 65536 00:13:58.869 } 00:13:58.869 ] 00:13:58.869 } 00:13:58.869 } 00:13:58.869 }' 00:13:58.869 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:58.869 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:58.869 BaseBdev2 
00:13:58.869 BaseBdev3 00:13:58.869 BaseBdev4' 00:13:58.869 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:59.129 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:59.129 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:59.129 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:59.129 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:59.129 15:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.129 15:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.129 15:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.129 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:59.129 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:59.129 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:59.129 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:59.129 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:59.129 15:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.129 15:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.129 15:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.129 15:40:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:59.129 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:59.129 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:59.129 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:59.129 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:59.129 15:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.129 15:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.129 15:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.129 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:59.129 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:59.129 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:59.129 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:59.129 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:59.129 15:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.129 15:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.129 15:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.129 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:59.129 15:40:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:59.129 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:59.129 15:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.129 15:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.129 [2024-12-06 15:40:42.388702] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:59.129 [2024-12-06 15:40:42.388741] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:59.129 [2024-12-06 15:40:42.388804] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:59.389 15:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.389 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:59.389 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:13:59.389 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:59.389 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:59.389 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:59.389 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:13:59.389 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:59.389 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:59.389 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:59.389 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:13:59.389 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:59.389 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.389 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.389 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.389 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.389 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.389 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.389 15:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.390 15:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.390 15:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.390 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.390 "name": "Existed_Raid", 00:13:59.390 "uuid": "a9c72bc6-cf14-40a6-9d68-67237f7c3ed6", 00:13:59.390 "strip_size_kb": 64, 00:13:59.390 "state": "offline", 00:13:59.390 "raid_level": "concat", 00:13:59.390 "superblock": false, 00:13:59.390 "num_base_bdevs": 4, 00:13:59.390 "num_base_bdevs_discovered": 3, 00:13:59.390 "num_base_bdevs_operational": 3, 00:13:59.390 "base_bdevs_list": [ 00:13:59.390 { 00:13:59.390 "name": null, 00:13:59.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.390 "is_configured": false, 00:13:59.390 "data_offset": 0, 00:13:59.390 "data_size": 65536 00:13:59.390 }, 00:13:59.390 { 00:13:59.390 "name": "BaseBdev2", 00:13:59.390 "uuid": "703f9cf3-7b09-4abc-86d2-39c4e842cd75", 00:13:59.390 "is_configured": 
true, 00:13:59.390 "data_offset": 0, 00:13:59.390 "data_size": 65536 00:13:59.390 }, 00:13:59.390 { 00:13:59.390 "name": "BaseBdev3", 00:13:59.390 "uuid": "93cd3c22-5ae7-486c-be7f-224634ada91f", 00:13:59.390 "is_configured": true, 00:13:59.390 "data_offset": 0, 00:13:59.390 "data_size": 65536 00:13:59.390 }, 00:13:59.390 { 00:13:59.390 "name": "BaseBdev4", 00:13:59.390 "uuid": "22969152-b797-4f3f-a51c-6a5039527c69", 00:13:59.390 "is_configured": true, 00:13:59.390 "data_offset": 0, 00:13:59.390 "data_size": 65536 00:13:59.390 } 00:13:59.390 ] 00:13:59.390 }' 00:13:59.390 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.390 15:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.649 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:59.649 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:59.649 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.649 15:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.649 15:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.649 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:59.649 15:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.649 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:59.649 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:59.649 15:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:59.649 15:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:59.649 15:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.908 [2024-12-06 15:40:42.946343] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:59.908 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.908 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:59.908 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:59.908 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.908 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.908 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:59.908 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.908 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.908 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:59.908 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:59.908 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:59.908 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.908 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.908 [2024-12-06 15:40:43.113624] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:00.167 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.167 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:00.167 15:40:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:00.167 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.167 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.167 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:00.167 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.167 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.167 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:00.167 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:00.167 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:00.167 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.167 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.167 [2024-12-06 15:40:43.285562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:00.167 [2024-12-06 15:40:43.285743] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:00.167 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.167 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:00.167 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:00.167 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.167 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:14:00.167 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.167 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.167 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.168 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:00.168 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:00.168 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:00.168 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:00.168 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:00.168 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:00.168 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.168 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.427 BaseBdev2 00:14:00.427 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.427 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:00.427 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:00.427 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:00.427 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:00.427 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:00.427 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:14:00.427 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:00.427 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.427 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.427 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.427 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:00.427 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.427 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.427 [ 00:14:00.427 { 00:14:00.427 "name": "BaseBdev2", 00:14:00.428 "aliases": [ 00:14:00.428 "771e8c13-5ca9-408e-8e7e-3554401ef42b" 00:14:00.428 ], 00:14:00.428 "product_name": "Malloc disk", 00:14:00.428 "block_size": 512, 00:14:00.428 "num_blocks": 65536, 00:14:00.428 "uuid": "771e8c13-5ca9-408e-8e7e-3554401ef42b", 00:14:00.428 "assigned_rate_limits": { 00:14:00.428 "rw_ios_per_sec": 0, 00:14:00.428 "rw_mbytes_per_sec": 0, 00:14:00.428 "r_mbytes_per_sec": 0, 00:14:00.428 "w_mbytes_per_sec": 0 00:14:00.428 }, 00:14:00.428 "claimed": false, 00:14:00.428 "zoned": false, 00:14:00.428 "supported_io_types": { 00:14:00.428 "read": true, 00:14:00.428 "write": true, 00:14:00.428 "unmap": true, 00:14:00.428 "flush": true, 00:14:00.428 "reset": true, 00:14:00.428 "nvme_admin": false, 00:14:00.428 "nvme_io": false, 00:14:00.428 "nvme_io_md": false, 00:14:00.428 "write_zeroes": true, 00:14:00.428 "zcopy": true, 00:14:00.428 "get_zone_info": false, 00:14:00.428 "zone_management": false, 00:14:00.428 "zone_append": false, 00:14:00.428 "compare": false, 00:14:00.428 "compare_and_write": false, 00:14:00.428 "abort": true, 00:14:00.428 "seek_hole": false, 00:14:00.428 
"seek_data": false, 00:14:00.428 "copy": true, 00:14:00.428 "nvme_iov_md": false 00:14:00.428 }, 00:14:00.428 "memory_domains": [ 00:14:00.428 { 00:14:00.428 "dma_device_id": "system", 00:14:00.428 "dma_device_type": 1 00:14:00.428 }, 00:14:00.428 { 00:14:00.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.428 "dma_device_type": 2 00:14:00.428 } 00:14:00.428 ], 00:14:00.428 "driver_specific": {} 00:14:00.428 } 00:14:00.428 ] 00:14:00.428 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.428 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:00.428 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:00.428 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:00.428 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:00.428 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.428 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.428 BaseBdev3 00:14:00.428 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.428 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:00.428 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:00.428 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:00.428 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:00.428 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:00.428 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:14:00.428 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:00.428 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.428 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.428 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.428 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:00.428 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.428 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.428 [ 00:14:00.428 { 00:14:00.428 "name": "BaseBdev3", 00:14:00.428 "aliases": [ 00:14:00.428 "bada5147-9b13-436c-84d9-714d7d2bc63c" 00:14:00.428 ], 00:14:00.428 "product_name": "Malloc disk", 00:14:00.428 "block_size": 512, 00:14:00.428 "num_blocks": 65536, 00:14:00.428 "uuid": "bada5147-9b13-436c-84d9-714d7d2bc63c", 00:14:00.428 "assigned_rate_limits": { 00:14:00.428 "rw_ios_per_sec": 0, 00:14:00.428 "rw_mbytes_per_sec": 0, 00:14:00.428 "r_mbytes_per_sec": 0, 00:14:00.428 "w_mbytes_per_sec": 0 00:14:00.428 }, 00:14:00.428 "claimed": false, 00:14:00.428 "zoned": false, 00:14:00.428 "supported_io_types": { 00:14:00.428 "read": true, 00:14:00.428 "write": true, 00:14:00.428 "unmap": true, 00:14:00.428 "flush": true, 00:14:00.428 "reset": true, 00:14:00.428 "nvme_admin": false, 00:14:00.428 "nvme_io": false, 00:14:00.428 "nvme_io_md": false, 00:14:00.428 "write_zeroes": true, 00:14:00.428 "zcopy": true, 00:14:00.428 "get_zone_info": false, 00:14:00.428 "zone_management": false, 00:14:00.428 "zone_append": false, 00:14:00.428 "compare": false, 00:14:00.428 "compare_and_write": false, 00:14:00.428 "abort": true, 00:14:00.428 "seek_hole": false, 00:14:00.428 "seek_data": false, 
00:14:00.428 "copy": true, 00:14:00.428 "nvme_iov_md": false 00:14:00.428 }, 00:14:00.428 "memory_domains": [ 00:14:00.428 { 00:14:00.428 "dma_device_id": "system", 00:14:00.428 "dma_device_type": 1 00:14:00.428 }, 00:14:00.428 { 00:14:00.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.428 "dma_device_type": 2 00:14:00.428 } 00:14:00.428 ], 00:14:00.428 "driver_specific": {} 00:14:00.428 } 00:14:00.428 ] 00:14:00.428 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.428 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:00.428 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:00.428 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:00.428 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:00.428 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.428 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.428 BaseBdev4 00:14:00.428 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.428 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:00.428 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:00.428 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:00.428 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:00.428 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:00.428 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:00.428 
15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:00.428 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.428 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.428 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.428 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:00.428 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.429 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.429 [ 00:14:00.429 { 00:14:00.429 "name": "BaseBdev4", 00:14:00.429 "aliases": [ 00:14:00.429 "6116a157-2cb2-4d71-8bbb-10ddb5cc7077" 00:14:00.429 ], 00:14:00.429 "product_name": "Malloc disk", 00:14:00.429 "block_size": 512, 00:14:00.429 "num_blocks": 65536, 00:14:00.429 "uuid": "6116a157-2cb2-4d71-8bbb-10ddb5cc7077", 00:14:00.429 "assigned_rate_limits": { 00:14:00.429 "rw_ios_per_sec": 0, 00:14:00.429 "rw_mbytes_per_sec": 0, 00:14:00.429 "r_mbytes_per_sec": 0, 00:14:00.429 "w_mbytes_per_sec": 0 00:14:00.429 }, 00:14:00.429 "claimed": false, 00:14:00.429 "zoned": false, 00:14:00.429 "supported_io_types": { 00:14:00.429 "read": true, 00:14:00.429 "write": true, 00:14:00.429 "unmap": true, 00:14:00.429 "flush": true, 00:14:00.429 "reset": true, 00:14:00.429 "nvme_admin": false, 00:14:00.429 "nvme_io": false, 00:14:00.429 "nvme_io_md": false, 00:14:00.429 "write_zeroes": true, 00:14:00.429 "zcopy": true, 00:14:00.429 "get_zone_info": false, 00:14:00.429 "zone_management": false, 00:14:00.429 "zone_append": false, 00:14:00.429 "compare": false, 00:14:00.429 "compare_and_write": false, 00:14:00.688 "abort": true, 00:14:00.688 "seek_hole": false, 00:14:00.688 "seek_data": false, 00:14:00.688 
"copy": true, 00:14:00.688 "nvme_iov_md": false 00:14:00.688 }, 00:14:00.688 "memory_domains": [ 00:14:00.688 { 00:14:00.688 "dma_device_id": "system", 00:14:00.688 "dma_device_type": 1 00:14:00.688 }, 00:14:00.688 { 00:14:00.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.688 "dma_device_type": 2 00:14:00.688 } 00:14:00.688 ], 00:14:00.688 "driver_specific": {} 00:14:00.688 } 00:14:00.688 ] 00:14:00.688 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.688 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:00.688 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:00.688 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:00.688 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:00.688 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.688 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.688 [2024-12-06 15:40:43.729798] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:00.688 [2024-12-06 15:40:43.729966] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:00.688 [2024-12-06 15:40:43.730110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:00.688 [2024-12-06 15:40:43.732607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:00.688 [2024-12-06 15:40:43.732704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:00.688 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.688 15:40:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:00.688 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:00.688 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.688 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:00.688 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.688 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:00.688 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.688 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.688 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.688 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.688 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.688 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.688 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.688 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.688 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.688 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.688 "name": "Existed_Raid", 00:14:00.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.688 "strip_size_kb": 64, 00:14:00.688 "state": "configuring", 00:14:00.688 
"raid_level": "concat", 00:14:00.688 "superblock": false, 00:14:00.688 "num_base_bdevs": 4, 00:14:00.688 "num_base_bdevs_discovered": 3, 00:14:00.688 "num_base_bdevs_operational": 4, 00:14:00.688 "base_bdevs_list": [ 00:14:00.688 { 00:14:00.688 "name": "BaseBdev1", 00:14:00.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.688 "is_configured": false, 00:14:00.688 "data_offset": 0, 00:14:00.688 "data_size": 0 00:14:00.688 }, 00:14:00.688 { 00:14:00.688 "name": "BaseBdev2", 00:14:00.688 "uuid": "771e8c13-5ca9-408e-8e7e-3554401ef42b", 00:14:00.688 "is_configured": true, 00:14:00.688 "data_offset": 0, 00:14:00.688 "data_size": 65536 00:14:00.688 }, 00:14:00.688 { 00:14:00.688 "name": "BaseBdev3", 00:14:00.688 "uuid": "bada5147-9b13-436c-84d9-714d7d2bc63c", 00:14:00.688 "is_configured": true, 00:14:00.688 "data_offset": 0, 00:14:00.688 "data_size": 65536 00:14:00.688 }, 00:14:00.688 { 00:14:00.688 "name": "BaseBdev4", 00:14:00.688 "uuid": "6116a157-2cb2-4d71-8bbb-10ddb5cc7077", 00:14:00.688 "is_configured": true, 00:14:00.688 "data_offset": 0, 00:14:00.688 "data_size": 65536 00:14:00.688 } 00:14:00.689 ] 00:14:00.689 }' 00:14:00.689 15:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.689 15:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.948 15:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:00.948 15:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.948 15:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.948 [2024-12-06 15:40:44.141629] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:00.948 15:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.948 15:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:00.948 15:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:00.948 15:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.948 15:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:00.948 15:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.948 15:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:00.948 15:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.948 15:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.948 15:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.948 15:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.948 15:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.948 15:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.948 15:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.948 15:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.948 15:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.948 15:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.948 "name": "Existed_Raid", 00:14:00.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.948 "strip_size_kb": 64, 00:14:00.948 "state": "configuring", 00:14:00.948 "raid_level": "concat", 00:14:00.948 "superblock": false, 
00:14:00.948 "num_base_bdevs": 4, 00:14:00.948 "num_base_bdevs_discovered": 2, 00:14:00.948 "num_base_bdevs_operational": 4, 00:14:00.948 "base_bdevs_list": [ 00:14:00.948 { 00:14:00.948 "name": "BaseBdev1", 00:14:00.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.948 "is_configured": false, 00:14:00.948 "data_offset": 0, 00:14:00.948 "data_size": 0 00:14:00.948 }, 00:14:00.948 { 00:14:00.948 "name": null, 00:14:00.948 "uuid": "771e8c13-5ca9-408e-8e7e-3554401ef42b", 00:14:00.948 "is_configured": false, 00:14:00.948 "data_offset": 0, 00:14:00.948 "data_size": 65536 00:14:00.948 }, 00:14:00.948 { 00:14:00.948 "name": "BaseBdev3", 00:14:00.948 "uuid": "bada5147-9b13-436c-84d9-714d7d2bc63c", 00:14:00.948 "is_configured": true, 00:14:00.948 "data_offset": 0, 00:14:00.948 "data_size": 65536 00:14:00.948 }, 00:14:00.948 { 00:14:00.948 "name": "BaseBdev4", 00:14:00.948 "uuid": "6116a157-2cb2-4d71-8bbb-10ddb5cc7077", 00:14:00.948 "is_configured": true, 00:14:00.948 "data_offset": 0, 00:14:00.948 "data_size": 65536 00:14:00.948 } 00:14:00.948 ] 00:14:00.948 }' 00:14:00.949 15:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.949 15:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.533 15:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:01.533 15:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.533 15:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.533 15:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.533 15:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.533 15:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:01.533 15:40:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:01.533 15:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.533 15:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.533 [2024-12-06 15:40:44.687004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:01.533 BaseBdev1 00:14:01.533 15:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.533 15:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:01.533 15:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:01.534 15:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:01.534 15:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:01.534 15:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:01.534 15:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:01.534 15:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:01.534 15:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.534 15:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.534 15:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.534 15:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:01.534 15:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.534 15:40:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:01.534 [ 00:14:01.534 { 00:14:01.534 "name": "BaseBdev1", 00:14:01.534 "aliases": [ 00:14:01.534 "14c7b54a-f02f-4545-9ba6-e28db7f82234" 00:14:01.534 ], 00:14:01.534 "product_name": "Malloc disk", 00:14:01.534 "block_size": 512, 00:14:01.534 "num_blocks": 65536, 00:14:01.534 "uuid": "14c7b54a-f02f-4545-9ba6-e28db7f82234", 00:14:01.534 "assigned_rate_limits": { 00:14:01.534 "rw_ios_per_sec": 0, 00:14:01.534 "rw_mbytes_per_sec": 0, 00:14:01.534 "r_mbytes_per_sec": 0, 00:14:01.534 "w_mbytes_per_sec": 0 00:14:01.534 }, 00:14:01.534 "claimed": true, 00:14:01.534 "claim_type": "exclusive_write", 00:14:01.534 "zoned": false, 00:14:01.534 "supported_io_types": { 00:14:01.534 "read": true, 00:14:01.534 "write": true, 00:14:01.534 "unmap": true, 00:14:01.534 "flush": true, 00:14:01.534 "reset": true, 00:14:01.534 "nvme_admin": false, 00:14:01.534 "nvme_io": false, 00:14:01.534 "nvme_io_md": false, 00:14:01.534 "write_zeroes": true, 00:14:01.534 "zcopy": true, 00:14:01.534 "get_zone_info": false, 00:14:01.534 "zone_management": false, 00:14:01.534 "zone_append": false, 00:14:01.534 "compare": false, 00:14:01.534 "compare_and_write": false, 00:14:01.534 "abort": true, 00:14:01.534 "seek_hole": false, 00:14:01.534 "seek_data": false, 00:14:01.534 "copy": true, 00:14:01.534 "nvme_iov_md": false 00:14:01.534 }, 00:14:01.534 "memory_domains": [ 00:14:01.534 { 00:14:01.534 "dma_device_id": "system", 00:14:01.534 "dma_device_type": 1 00:14:01.534 }, 00:14:01.534 { 00:14:01.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.534 "dma_device_type": 2 00:14:01.534 } 00:14:01.534 ], 00:14:01.534 "driver_specific": {} 00:14:01.534 } 00:14:01.534 ] 00:14:01.534 15:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.534 15:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:01.534 15:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:01.534 15:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:01.534 15:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:01.534 15:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:01.534 15:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.534 15:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:01.534 15:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.534 15:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.534 15:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.534 15:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.534 15:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.534 15:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.534 15:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.534 15:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.534 15:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.534 15:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.534 "name": "Existed_Raid", 00:14:01.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.534 "strip_size_kb": 64, 00:14:01.534 "state": "configuring", 00:14:01.534 "raid_level": "concat", 00:14:01.534 "superblock": false, 
00:14:01.534 "num_base_bdevs": 4, 00:14:01.534 "num_base_bdevs_discovered": 3, 00:14:01.534 "num_base_bdevs_operational": 4, 00:14:01.534 "base_bdevs_list": [ 00:14:01.534 { 00:14:01.534 "name": "BaseBdev1", 00:14:01.534 "uuid": "14c7b54a-f02f-4545-9ba6-e28db7f82234", 00:14:01.534 "is_configured": true, 00:14:01.534 "data_offset": 0, 00:14:01.534 "data_size": 65536 00:14:01.534 }, 00:14:01.534 { 00:14:01.534 "name": null, 00:14:01.534 "uuid": "771e8c13-5ca9-408e-8e7e-3554401ef42b", 00:14:01.534 "is_configured": false, 00:14:01.534 "data_offset": 0, 00:14:01.534 "data_size": 65536 00:14:01.534 }, 00:14:01.534 { 00:14:01.534 "name": "BaseBdev3", 00:14:01.534 "uuid": "bada5147-9b13-436c-84d9-714d7d2bc63c", 00:14:01.534 "is_configured": true, 00:14:01.534 "data_offset": 0, 00:14:01.534 "data_size": 65536 00:14:01.534 }, 00:14:01.534 { 00:14:01.534 "name": "BaseBdev4", 00:14:01.534 "uuid": "6116a157-2cb2-4d71-8bbb-10ddb5cc7077", 00:14:01.534 "is_configured": true, 00:14:01.534 "data_offset": 0, 00:14:01.535 "data_size": 65536 00:14:01.535 } 00:14:01.535 ] 00:14:01.535 }' 00:14:01.535 15:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.535 15:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.102 15:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.102 15:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.102 15:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.102 15:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:02.102 15:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.102 15:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:02.102 15:40:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:02.102 15:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.102 15:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.102 [2024-12-06 15:40:45.222716] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:02.102 15:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.102 15:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:02.102 15:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:02.102 15:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:02.102 15:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:02.102 15:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:02.102 15:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:02.102 15:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.102 15:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.102 15:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.102 15:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.103 15:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.103 15:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.103 15:40:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:02.103 15:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:02.103 15:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.103 15:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.103 "name": "Existed_Raid", 00:14:02.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.103 "strip_size_kb": 64, 00:14:02.103 "state": "configuring", 00:14:02.103 "raid_level": "concat", 00:14:02.103 "superblock": false, 00:14:02.103 "num_base_bdevs": 4, 00:14:02.103 "num_base_bdevs_discovered": 2, 00:14:02.103 "num_base_bdevs_operational": 4, 00:14:02.103 "base_bdevs_list": [ 00:14:02.103 { 00:14:02.103 "name": "BaseBdev1", 00:14:02.103 "uuid": "14c7b54a-f02f-4545-9ba6-e28db7f82234", 00:14:02.103 "is_configured": true, 00:14:02.103 "data_offset": 0, 00:14:02.103 "data_size": 65536 00:14:02.103 }, 00:14:02.103 { 00:14:02.103 "name": null, 00:14:02.103 "uuid": "771e8c13-5ca9-408e-8e7e-3554401ef42b", 00:14:02.103 "is_configured": false, 00:14:02.103 "data_offset": 0, 00:14:02.103 "data_size": 65536 00:14:02.103 }, 00:14:02.103 { 00:14:02.103 "name": null, 00:14:02.103 "uuid": "bada5147-9b13-436c-84d9-714d7d2bc63c", 00:14:02.103 "is_configured": false, 00:14:02.103 "data_offset": 0, 00:14:02.103 "data_size": 65536 00:14:02.103 }, 00:14:02.103 { 00:14:02.103 "name": "BaseBdev4", 00:14:02.103 "uuid": "6116a157-2cb2-4d71-8bbb-10ddb5cc7077", 00:14:02.103 "is_configured": true, 00:14:02.103 "data_offset": 0, 00:14:02.103 "data_size": 65536 00:14:02.103 } 00:14:02.103 ] 00:14:02.103 }' 00:14:02.103 15:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.103 15:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.361 15:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:14:02.361 15:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.361 15:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.361 15:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.361 15:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.361 15:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:02.361 15:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:02.361 15:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.361 15:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.361 [2024-12-06 15:40:45.634665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:02.361 15:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.361 15:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:02.361 15:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:02.361 15:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:02.361 15:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:02.361 15:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:02.361 15:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:02.361 15:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:14:02.361 15:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.361 15:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.361 15:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.361 15:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.361 15:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.361 15:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.361 15:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:02.619 15:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.619 15:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.619 "name": "Existed_Raid", 00:14:02.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.619 "strip_size_kb": 64, 00:14:02.619 "state": "configuring", 00:14:02.619 "raid_level": "concat", 00:14:02.619 "superblock": false, 00:14:02.619 "num_base_bdevs": 4, 00:14:02.619 "num_base_bdevs_discovered": 3, 00:14:02.619 "num_base_bdevs_operational": 4, 00:14:02.619 "base_bdevs_list": [ 00:14:02.619 { 00:14:02.619 "name": "BaseBdev1", 00:14:02.619 "uuid": "14c7b54a-f02f-4545-9ba6-e28db7f82234", 00:14:02.619 "is_configured": true, 00:14:02.619 "data_offset": 0, 00:14:02.619 "data_size": 65536 00:14:02.619 }, 00:14:02.619 { 00:14:02.619 "name": null, 00:14:02.619 "uuid": "771e8c13-5ca9-408e-8e7e-3554401ef42b", 00:14:02.619 "is_configured": false, 00:14:02.619 "data_offset": 0, 00:14:02.619 "data_size": 65536 00:14:02.619 }, 00:14:02.619 { 00:14:02.619 "name": "BaseBdev3", 00:14:02.619 "uuid": "bada5147-9b13-436c-84d9-714d7d2bc63c", 00:14:02.619 "is_configured": 
true, 00:14:02.619 "data_offset": 0, 00:14:02.619 "data_size": 65536 00:14:02.619 }, 00:14:02.619 { 00:14:02.619 "name": "BaseBdev4", 00:14:02.619 "uuid": "6116a157-2cb2-4d71-8bbb-10ddb5cc7077", 00:14:02.619 "is_configured": true, 00:14:02.619 "data_offset": 0, 00:14:02.619 "data_size": 65536 00:14:02.619 } 00:14:02.619 ] 00:14:02.619 }' 00:14:02.619 15:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.619 15:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.878 15:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.878 15:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:02.878 15:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.878 15:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.878 15:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.878 15:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:02.878 15:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:02.878 15:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.878 15:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.878 [2024-12-06 15:40:46.126714] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:03.136 15:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.136 15:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:03.136 15:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:14:03.136 15:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:03.136 15:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:03.136 15:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:03.136 15:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:03.136 15:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.136 15:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.136 15:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.137 15:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.137 15:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:03.137 15:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.137 15:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.137 15:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.137 15:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.137 15:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.137 "name": "Existed_Raid", 00:14:03.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.137 "strip_size_kb": 64, 00:14:03.137 "state": "configuring", 00:14:03.137 "raid_level": "concat", 00:14:03.137 "superblock": false, 00:14:03.137 "num_base_bdevs": 4, 00:14:03.137 "num_base_bdevs_discovered": 2, 00:14:03.137 "num_base_bdevs_operational": 4, 00:14:03.137 
"base_bdevs_list": [ 00:14:03.137 { 00:14:03.137 "name": null, 00:14:03.137 "uuid": "14c7b54a-f02f-4545-9ba6-e28db7f82234", 00:14:03.137 "is_configured": false, 00:14:03.137 "data_offset": 0, 00:14:03.137 "data_size": 65536 00:14:03.137 }, 00:14:03.137 { 00:14:03.137 "name": null, 00:14:03.137 "uuid": "771e8c13-5ca9-408e-8e7e-3554401ef42b", 00:14:03.137 "is_configured": false, 00:14:03.137 "data_offset": 0, 00:14:03.137 "data_size": 65536 00:14:03.137 }, 00:14:03.137 { 00:14:03.137 "name": "BaseBdev3", 00:14:03.137 "uuid": "bada5147-9b13-436c-84d9-714d7d2bc63c", 00:14:03.137 "is_configured": true, 00:14:03.137 "data_offset": 0, 00:14:03.137 "data_size": 65536 00:14:03.137 }, 00:14:03.137 { 00:14:03.137 "name": "BaseBdev4", 00:14:03.137 "uuid": "6116a157-2cb2-4d71-8bbb-10ddb5cc7077", 00:14:03.137 "is_configured": true, 00:14:03.137 "data_offset": 0, 00:14:03.137 "data_size": 65536 00:14:03.137 } 00:14:03.137 ] 00:14:03.137 }' 00:14:03.137 15:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.137 15:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.395 15:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:03.395 15:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.395 15:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.395 15:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.395 15:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.395 15:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:03.395 15:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:03.395 15:40:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.395 15:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.395 [2024-12-06 15:40:46.679757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:03.395 15:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.395 15:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:03.395 15:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:03.395 15:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:03.395 15:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:03.395 15:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:03.395 15:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:03.395 15:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.395 15:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.395 15:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.395 15:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.652 15:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:03.652 15:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.652 15:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.652 15:40:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.652 15:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.653 15:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.653 "name": "Existed_Raid", 00:14:03.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.653 "strip_size_kb": 64, 00:14:03.653 "state": "configuring", 00:14:03.653 "raid_level": "concat", 00:14:03.653 "superblock": false, 00:14:03.653 "num_base_bdevs": 4, 00:14:03.653 "num_base_bdevs_discovered": 3, 00:14:03.653 "num_base_bdevs_operational": 4, 00:14:03.653 "base_bdevs_list": [ 00:14:03.653 { 00:14:03.653 "name": null, 00:14:03.653 "uuid": "14c7b54a-f02f-4545-9ba6-e28db7f82234", 00:14:03.653 "is_configured": false, 00:14:03.653 "data_offset": 0, 00:14:03.653 "data_size": 65536 00:14:03.653 }, 00:14:03.653 { 00:14:03.653 "name": "BaseBdev2", 00:14:03.653 "uuid": "771e8c13-5ca9-408e-8e7e-3554401ef42b", 00:14:03.653 "is_configured": true, 00:14:03.653 "data_offset": 0, 00:14:03.653 "data_size": 65536 00:14:03.653 }, 00:14:03.653 { 00:14:03.653 "name": "BaseBdev3", 00:14:03.653 "uuid": "bada5147-9b13-436c-84d9-714d7d2bc63c", 00:14:03.653 "is_configured": true, 00:14:03.653 "data_offset": 0, 00:14:03.653 "data_size": 65536 00:14:03.653 }, 00:14:03.653 { 00:14:03.653 "name": "BaseBdev4", 00:14:03.653 "uuid": "6116a157-2cb2-4d71-8bbb-10ddb5cc7077", 00:14:03.653 "is_configured": true, 00:14:03.653 "data_offset": 0, 00:14:03.653 "data_size": 65536 00:14:03.653 } 00:14:03.653 ] 00:14:03.653 }' 00:14:03.653 15:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.653 15:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.911 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.911 15:40:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:03.911 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.911 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.911 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.911 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:03.911 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.911 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:03.911 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.911 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.911 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.911 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 14c7b54a-f02f-4545-9ba6-e28db7f82234 00:14:03.911 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.911 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.170 [2024-12-06 15:40:47.227790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:04.170 [2024-12-06 15:40:47.227997] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:04.170 [2024-12-06 15:40:47.228020] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:04.170 [2024-12-06 15:40:47.228373] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:04.170 [2024-12-06 15:40:47.228564] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:04.170 [2024-12-06 15:40:47.228581] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:04.170 [2024-12-06 15:40:47.228839] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.170 NewBaseBdev 00:14:04.170 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.170 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:04.170 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:04.170 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:04.170 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:04.170 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:04.170 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:04.170 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:04.170 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.170 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.170 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.170 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:04.170 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.170 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.170 [ 00:14:04.170 { 
00:14:04.170 "name": "NewBaseBdev", 00:14:04.170 "aliases": [ 00:14:04.170 "14c7b54a-f02f-4545-9ba6-e28db7f82234" 00:14:04.170 ], 00:14:04.170 "product_name": "Malloc disk", 00:14:04.170 "block_size": 512, 00:14:04.170 "num_blocks": 65536, 00:14:04.170 "uuid": "14c7b54a-f02f-4545-9ba6-e28db7f82234", 00:14:04.170 "assigned_rate_limits": { 00:14:04.170 "rw_ios_per_sec": 0, 00:14:04.170 "rw_mbytes_per_sec": 0, 00:14:04.170 "r_mbytes_per_sec": 0, 00:14:04.170 "w_mbytes_per_sec": 0 00:14:04.170 }, 00:14:04.170 "claimed": true, 00:14:04.170 "claim_type": "exclusive_write", 00:14:04.170 "zoned": false, 00:14:04.170 "supported_io_types": { 00:14:04.170 "read": true, 00:14:04.170 "write": true, 00:14:04.170 "unmap": true, 00:14:04.170 "flush": true, 00:14:04.170 "reset": true, 00:14:04.170 "nvme_admin": false, 00:14:04.170 "nvme_io": false, 00:14:04.170 "nvme_io_md": false, 00:14:04.170 "write_zeroes": true, 00:14:04.170 "zcopy": true, 00:14:04.170 "get_zone_info": false, 00:14:04.170 "zone_management": false, 00:14:04.170 "zone_append": false, 00:14:04.170 "compare": false, 00:14:04.170 "compare_and_write": false, 00:14:04.170 "abort": true, 00:14:04.170 "seek_hole": false, 00:14:04.170 "seek_data": false, 00:14:04.170 "copy": true, 00:14:04.170 "nvme_iov_md": false 00:14:04.170 }, 00:14:04.170 "memory_domains": [ 00:14:04.170 { 00:14:04.170 "dma_device_id": "system", 00:14:04.170 "dma_device_type": 1 00:14:04.170 }, 00:14:04.170 { 00:14:04.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.170 "dma_device_type": 2 00:14:04.170 } 00:14:04.170 ], 00:14:04.170 "driver_specific": {} 00:14:04.170 } 00:14:04.170 ] 00:14:04.170 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.170 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:04.170 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:04.170 
15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:04.170 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.170 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:04.170 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:04.170 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:04.170 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.170 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.170 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.170 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.170 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.170 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.170 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.170 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.170 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.170 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.171 "name": "Existed_Raid", 00:14:04.171 "uuid": "b188e1d3-ea4b-4f8b-b33a-14f8b29bd80a", 00:14:04.171 "strip_size_kb": 64, 00:14:04.171 "state": "online", 00:14:04.171 "raid_level": "concat", 00:14:04.171 "superblock": false, 00:14:04.171 "num_base_bdevs": 4, 00:14:04.171 "num_base_bdevs_discovered": 4, 00:14:04.171 
"num_base_bdevs_operational": 4, 00:14:04.171 "base_bdevs_list": [ 00:14:04.171 { 00:14:04.171 "name": "NewBaseBdev", 00:14:04.171 "uuid": "14c7b54a-f02f-4545-9ba6-e28db7f82234", 00:14:04.171 "is_configured": true, 00:14:04.171 "data_offset": 0, 00:14:04.171 "data_size": 65536 00:14:04.171 }, 00:14:04.171 { 00:14:04.171 "name": "BaseBdev2", 00:14:04.171 "uuid": "771e8c13-5ca9-408e-8e7e-3554401ef42b", 00:14:04.171 "is_configured": true, 00:14:04.171 "data_offset": 0, 00:14:04.171 "data_size": 65536 00:14:04.171 }, 00:14:04.171 { 00:14:04.171 "name": "BaseBdev3", 00:14:04.171 "uuid": "bada5147-9b13-436c-84d9-714d7d2bc63c", 00:14:04.171 "is_configured": true, 00:14:04.171 "data_offset": 0, 00:14:04.171 "data_size": 65536 00:14:04.171 }, 00:14:04.171 { 00:14:04.171 "name": "BaseBdev4", 00:14:04.171 "uuid": "6116a157-2cb2-4d71-8bbb-10ddb5cc7077", 00:14:04.171 "is_configured": true, 00:14:04.171 "data_offset": 0, 00:14:04.171 "data_size": 65536 00:14:04.171 } 00:14:04.171 ] 00:14:04.171 }' 00:14:04.171 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.171 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.429 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:04.429 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:04.429 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:04.429 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:04.429 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:04.429 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:04.429 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:14:04.429 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:04.429 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.429 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.429 [2024-12-06 15:40:47.675786] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:04.429 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.429 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:04.429 "name": "Existed_Raid", 00:14:04.429 "aliases": [ 00:14:04.429 "b188e1d3-ea4b-4f8b-b33a-14f8b29bd80a" 00:14:04.429 ], 00:14:04.429 "product_name": "Raid Volume", 00:14:04.429 "block_size": 512, 00:14:04.429 "num_blocks": 262144, 00:14:04.429 "uuid": "b188e1d3-ea4b-4f8b-b33a-14f8b29bd80a", 00:14:04.429 "assigned_rate_limits": { 00:14:04.429 "rw_ios_per_sec": 0, 00:14:04.429 "rw_mbytes_per_sec": 0, 00:14:04.429 "r_mbytes_per_sec": 0, 00:14:04.429 "w_mbytes_per_sec": 0 00:14:04.429 }, 00:14:04.429 "claimed": false, 00:14:04.429 "zoned": false, 00:14:04.429 "supported_io_types": { 00:14:04.429 "read": true, 00:14:04.429 "write": true, 00:14:04.429 "unmap": true, 00:14:04.429 "flush": true, 00:14:04.429 "reset": true, 00:14:04.429 "nvme_admin": false, 00:14:04.429 "nvme_io": false, 00:14:04.429 "nvme_io_md": false, 00:14:04.429 "write_zeroes": true, 00:14:04.429 "zcopy": false, 00:14:04.430 "get_zone_info": false, 00:14:04.430 "zone_management": false, 00:14:04.430 "zone_append": false, 00:14:04.430 "compare": false, 00:14:04.430 "compare_and_write": false, 00:14:04.430 "abort": false, 00:14:04.430 "seek_hole": false, 00:14:04.430 "seek_data": false, 00:14:04.430 "copy": false, 00:14:04.430 "nvme_iov_md": false 00:14:04.430 }, 00:14:04.430 "memory_domains": [ 00:14:04.430 { 00:14:04.430 "dma_device_id": "system", 
00:14:04.430 "dma_device_type": 1 00:14:04.430 }, 00:14:04.430 { 00:14:04.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.430 "dma_device_type": 2 00:14:04.430 }, 00:14:04.430 { 00:14:04.430 "dma_device_id": "system", 00:14:04.430 "dma_device_type": 1 00:14:04.430 }, 00:14:04.430 { 00:14:04.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.430 "dma_device_type": 2 00:14:04.430 }, 00:14:04.430 { 00:14:04.430 "dma_device_id": "system", 00:14:04.430 "dma_device_type": 1 00:14:04.430 }, 00:14:04.430 { 00:14:04.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.430 "dma_device_type": 2 00:14:04.430 }, 00:14:04.430 { 00:14:04.430 "dma_device_id": "system", 00:14:04.430 "dma_device_type": 1 00:14:04.430 }, 00:14:04.430 { 00:14:04.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.430 "dma_device_type": 2 00:14:04.430 } 00:14:04.430 ], 00:14:04.430 "driver_specific": { 00:14:04.430 "raid": { 00:14:04.430 "uuid": "b188e1d3-ea4b-4f8b-b33a-14f8b29bd80a", 00:14:04.430 "strip_size_kb": 64, 00:14:04.430 "state": "online", 00:14:04.430 "raid_level": "concat", 00:14:04.430 "superblock": false, 00:14:04.430 "num_base_bdevs": 4, 00:14:04.430 "num_base_bdevs_discovered": 4, 00:14:04.430 "num_base_bdevs_operational": 4, 00:14:04.430 "base_bdevs_list": [ 00:14:04.430 { 00:14:04.430 "name": "NewBaseBdev", 00:14:04.430 "uuid": "14c7b54a-f02f-4545-9ba6-e28db7f82234", 00:14:04.430 "is_configured": true, 00:14:04.430 "data_offset": 0, 00:14:04.430 "data_size": 65536 00:14:04.430 }, 00:14:04.430 { 00:14:04.430 "name": "BaseBdev2", 00:14:04.430 "uuid": "771e8c13-5ca9-408e-8e7e-3554401ef42b", 00:14:04.430 "is_configured": true, 00:14:04.430 "data_offset": 0, 00:14:04.430 "data_size": 65536 00:14:04.430 }, 00:14:04.430 { 00:14:04.430 "name": "BaseBdev3", 00:14:04.430 "uuid": "bada5147-9b13-436c-84d9-714d7d2bc63c", 00:14:04.430 "is_configured": true, 00:14:04.430 "data_offset": 0, 00:14:04.430 "data_size": 65536 00:14:04.430 }, 00:14:04.430 { 00:14:04.430 "name": "BaseBdev4", 
00:14:04.430 "uuid": "6116a157-2cb2-4d71-8bbb-10ddb5cc7077", 00:14:04.430 "is_configured": true, 00:14:04.430 "data_offset": 0, 00:14:04.430 "data_size": 65536 00:14:04.430 } 00:14:04.430 ] 00:14:04.430 } 00:14:04.430 } 00:14:04.430 }' 00:14:04.430 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:04.689 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:04.689 BaseBdev2 00:14:04.689 BaseBdev3 00:14:04.689 BaseBdev4' 00:14:04.689 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:04.689 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:04.689 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:04.689 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:04.689 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.689 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.689 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:04.689 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.689 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:04.689 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:04.689 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:04.689 15:40:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:04.689 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:04.689 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.689 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.689 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.689 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:04.689 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:04.689 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:04.689 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:04.689 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:04.689 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.689 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.689 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.689 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:04.689 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:04.689 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:04.689 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:04.689 15:40:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.689 15:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:04.689 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.948 15:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.948 15:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:04.948 15:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:04.948 15:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:04.948 15:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.948 15:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.948 [2024-12-06 15:40:48.010882] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:04.948 [2024-12-06 15:40:48.011037] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:04.948 [2024-12-06 15:40:48.011155] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:04.948 [2024-12-06 15:40:48.011242] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:04.948 [2024-12-06 15:40:48.011256] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:04.948 15:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.948 15:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71306 00:14:04.948 15:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 71306 ']' 00:14:04.948 15:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71306 00:14:04.948 15:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:04.948 15:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:04.948 15:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71306 00:14:04.948 killing process with pid 71306 00:14:04.948 15:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:04.948 15:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:04.948 15:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71306' 00:14:04.948 15:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71306 00:14:04.948 [2024-12-06 15:40:48.066689] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:04.948 15:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71306 00:14:05.516 [2024-12-06 15:40:48.506615] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:06.894 ************************************ 00:14:06.894 END TEST raid_state_function_test 00:14:06.894 ************************************ 00:14:06.894 00:14:06.894 real 0m11.655s 00:14:06.894 user 0m18.105s 00:14:06.894 sys 0m2.571s 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.894 15:40:49 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 
00:14:06.894 15:40:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:06.894 15:40:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:06.894 15:40:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:06.894 ************************************ 00:14:06.894 START TEST raid_state_function_test_sb 00:14:06.894 ************************************ 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:06.894 15:40:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=71978 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:06.894 Process raid pid: 71978 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71978' 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71978 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 71978 ']' 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:06.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:06.894 15:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.894 [2024-12-06 15:40:49.966762] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:14:06.895 [2024-12-06 15:40:49.966918] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.895 [2024-12-06 15:40:50.157066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.153 [2024-12-06 15:40:50.307122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.412 [2024-12-06 15:40:50.551275] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:07.412 [2024-12-06 15:40:50.551351] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:07.671 15:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:07.671 15:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:07.671 15:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:07.671 15:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.671 15:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.671 [2024-12-06 15:40:50.879739] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:07.671 [2024-12-06 15:40:50.879812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:07.671 [2024-12-06 15:40:50.879833] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:07.671 [2024-12-06 15:40:50.879848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:07.671 [2024-12-06 15:40:50.879856] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:14:07.671 [2024-12-06 15:40:50.879868] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:07.671 [2024-12-06 15:40:50.879876] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:07.671 [2024-12-06 15:40:50.879889] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:07.671 15:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.671 15:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:07.671 15:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:07.671 15:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:07.671 15:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:07.671 15:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:07.671 15:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:07.671 15:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.671 15:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.671 15:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.671 15:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.671 15:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.671 15:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:07.671 
15:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.671 15:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.671 15:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.671 15:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.671 "name": "Existed_Raid", 00:14:07.671 "uuid": "c13cd457-745a-4b3b-8767-2969708aa57f", 00:14:07.671 "strip_size_kb": 64, 00:14:07.671 "state": "configuring", 00:14:07.671 "raid_level": "concat", 00:14:07.671 "superblock": true, 00:14:07.672 "num_base_bdevs": 4, 00:14:07.672 "num_base_bdevs_discovered": 0, 00:14:07.672 "num_base_bdevs_operational": 4, 00:14:07.672 "base_bdevs_list": [ 00:14:07.672 { 00:14:07.672 "name": "BaseBdev1", 00:14:07.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.672 "is_configured": false, 00:14:07.672 "data_offset": 0, 00:14:07.672 "data_size": 0 00:14:07.672 }, 00:14:07.672 { 00:14:07.672 "name": "BaseBdev2", 00:14:07.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.672 "is_configured": false, 00:14:07.672 "data_offset": 0, 00:14:07.672 "data_size": 0 00:14:07.672 }, 00:14:07.672 { 00:14:07.672 "name": "BaseBdev3", 00:14:07.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.672 "is_configured": false, 00:14:07.672 "data_offset": 0, 00:14:07.672 "data_size": 0 00:14:07.672 }, 00:14:07.672 { 00:14:07.672 "name": "BaseBdev4", 00:14:07.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.672 "is_configured": false, 00:14:07.672 "data_offset": 0, 00:14:07.672 "data_size": 0 00:14:07.672 } 00:14:07.672 ] 00:14:07.672 }' 00:14:07.672 15:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.672 15:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.239 15:40:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:08.239 15:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.239 15:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.239 [2024-12-06 15:40:51.327217] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:08.239 [2024-12-06 15:40:51.327410] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:08.239 15:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.239 15:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:08.239 15:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.239 15:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.239 [2024-12-06 15:40:51.339186] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:08.239 [2024-12-06 15:40:51.339348] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:08.239 [2024-12-06 15:40:51.339438] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:08.239 [2024-12-06 15:40:51.339484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:08.239 [2024-12-06 15:40:51.339536] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:08.239 [2024-12-06 15:40:51.339574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:08.239 [2024-12-06 15:40:51.339604] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:14:08.239 [2024-12-06 15:40:51.339693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:08.239 15:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.239 15:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:08.239 15:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.239 15:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.239 [2024-12-06 15:40:51.395222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:08.239 BaseBdev1 00:14:08.240 15:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.240 15:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:08.240 15:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:08.240 15:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:08.240 15:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:08.240 15:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:08.240 15:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:08.240 15:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:08.240 15:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.240 15:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.240 15:40:51 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.240 15:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:08.240 15:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.240 15:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.240 [ 00:14:08.240 { 00:14:08.240 "name": "BaseBdev1", 00:14:08.240 "aliases": [ 00:14:08.240 "e59a71c7-de27-4708-a3e1-7f35642c691c" 00:14:08.240 ], 00:14:08.240 "product_name": "Malloc disk", 00:14:08.240 "block_size": 512, 00:14:08.240 "num_blocks": 65536, 00:14:08.240 "uuid": "e59a71c7-de27-4708-a3e1-7f35642c691c", 00:14:08.240 "assigned_rate_limits": { 00:14:08.240 "rw_ios_per_sec": 0, 00:14:08.240 "rw_mbytes_per_sec": 0, 00:14:08.240 "r_mbytes_per_sec": 0, 00:14:08.240 "w_mbytes_per_sec": 0 00:14:08.240 }, 00:14:08.240 "claimed": true, 00:14:08.240 "claim_type": "exclusive_write", 00:14:08.240 "zoned": false, 00:14:08.240 "supported_io_types": { 00:14:08.240 "read": true, 00:14:08.240 "write": true, 00:14:08.240 "unmap": true, 00:14:08.240 "flush": true, 00:14:08.240 "reset": true, 00:14:08.240 "nvme_admin": false, 00:14:08.240 "nvme_io": false, 00:14:08.240 "nvme_io_md": false, 00:14:08.240 "write_zeroes": true, 00:14:08.240 "zcopy": true, 00:14:08.240 "get_zone_info": false, 00:14:08.240 "zone_management": false, 00:14:08.240 "zone_append": false, 00:14:08.240 "compare": false, 00:14:08.240 "compare_and_write": false, 00:14:08.240 "abort": true, 00:14:08.240 "seek_hole": false, 00:14:08.240 "seek_data": false, 00:14:08.240 "copy": true, 00:14:08.240 "nvme_iov_md": false 00:14:08.240 }, 00:14:08.240 "memory_domains": [ 00:14:08.240 { 00:14:08.240 "dma_device_id": "system", 00:14:08.240 "dma_device_type": 1 00:14:08.240 }, 00:14:08.240 { 00:14:08.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.240 "dma_device_type": 2 00:14:08.240 } 
00:14:08.240 ], 00:14:08.240 "driver_specific": {} 00:14:08.240 } 00:14:08.240 ] 00:14:08.240 15:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.240 15:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:08.240 15:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:08.240 15:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:08.240 15:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:08.240 15:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:08.240 15:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.240 15:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:08.240 15:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.240 15:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.240 15:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.240 15:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.240 15:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.240 15:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.240 15:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.240 15:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.240 15:40:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.240 15:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.240 "name": "Existed_Raid", 00:14:08.240 "uuid": "b7d577c5-e65d-4610-8436-4711b550cb04", 00:14:08.240 "strip_size_kb": 64, 00:14:08.240 "state": "configuring", 00:14:08.240 "raid_level": "concat", 00:14:08.240 "superblock": true, 00:14:08.240 "num_base_bdevs": 4, 00:14:08.240 "num_base_bdevs_discovered": 1, 00:14:08.240 "num_base_bdevs_operational": 4, 00:14:08.240 "base_bdevs_list": [ 00:14:08.240 { 00:14:08.240 "name": "BaseBdev1", 00:14:08.240 "uuid": "e59a71c7-de27-4708-a3e1-7f35642c691c", 00:14:08.240 "is_configured": true, 00:14:08.240 "data_offset": 2048, 00:14:08.240 "data_size": 63488 00:14:08.240 }, 00:14:08.240 { 00:14:08.240 "name": "BaseBdev2", 00:14:08.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.240 "is_configured": false, 00:14:08.240 "data_offset": 0, 00:14:08.240 "data_size": 0 00:14:08.240 }, 00:14:08.240 { 00:14:08.240 "name": "BaseBdev3", 00:14:08.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.240 "is_configured": false, 00:14:08.240 "data_offset": 0, 00:14:08.240 "data_size": 0 00:14:08.240 }, 00:14:08.240 { 00:14:08.240 "name": "BaseBdev4", 00:14:08.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.240 "is_configured": false, 00:14:08.240 "data_offset": 0, 00:14:08.240 "data_size": 0 00:14:08.240 } 00:14:08.240 ] 00:14:08.240 }' 00:14:08.240 15:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.240 15:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.809 15:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:08.809 15:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.809 15:40:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.809 [2024-12-06 15:40:51.870686] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:08.809 [2024-12-06 15:40:51.870890] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:08.809 15:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.809 15:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:08.809 15:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.809 15:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.809 [2024-12-06 15:40:51.882753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:08.809 [2024-12-06 15:40:51.885146] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:08.809 [2024-12-06 15:40:51.885197] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:08.809 [2024-12-06 15:40:51.885209] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:08.809 [2024-12-06 15:40:51.885224] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:08.809 [2024-12-06 15:40:51.885232] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:08.809 [2024-12-06 15:40:51.885244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:08.809 15:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.809 15:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:14:08.809 15:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:08.809 15:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:08.809 15:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:08.809 15:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:08.809 15:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:08.809 15:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.809 15:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:08.809 15:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.809 15:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.809 15:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.809 15:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.809 15:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.809 15:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.809 15:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.809 15:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.809 15:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.810 15:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:14:08.810 "name": "Existed_Raid", 00:14:08.810 "uuid": "15b58854-87ae-46c5-8ead-951c19ac1293", 00:14:08.810 "strip_size_kb": 64, 00:14:08.810 "state": "configuring", 00:14:08.810 "raid_level": "concat", 00:14:08.810 "superblock": true, 00:14:08.810 "num_base_bdevs": 4, 00:14:08.810 "num_base_bdevs_discovered": 1, 00:14:08.810 "num_base_bdevs_operational": 4, 00:14:08.810 "base_bdevs_list": [ 00:14:08.810 { 00:14:08.810 "name": "BaseBdev1", 00:14:08.810 "uuid": "e59a71c7-de27-4708-a3e1-7f35642c691c", 00:14:08.810 "is_configured": true, 00:14:08.810 "data_offset": 2048, 00:14:08.810 "data_size": 63488 00:14:08.810 }, 00:14:08.810 { 00:14:08.810 "name": "BaseBdev2", 00:14:08.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.810 "is_configured": false, 00:14:08.810 "data_offset": 0, 00:14:08.810 "data_size": 0 00:14:08.810 }, 00:14:08.810 { 00:14:08.810 "name": "BaseBdev3", 00:14:08.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.810 "is_configured": false, 00:14:08.810 "data_offset": 0, 00:14:08.810 "data_size": 0 00:14:08.810 }, 00:14:08.810 { 00:14:08.810 "name": "BaseBdev4", 00:14:08.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.810 "is_configured": false, 00:14:08.810 "data_offset": 0, 00:14:08.810 "data_size": 0 00:14:08.810 } 00:14:08.810 ] 00:14:08.810 }' 00:14:08.810 15:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.810 15:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.069 15:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:09.069 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.069 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.069 [2024-12-06 15:40:52.336207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:14:09.069 BaseBdev2 00:14:09.069 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.069 15:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:09.069 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:09.069 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:09.069 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:09.069 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:09.069 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:09.069 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:09.069 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.069 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.069 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.069 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:09.069 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.069 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.329 [ 00:14:09.329 { 00:14:09.329 "name": "BaseBdev2", 00:14:09.329 "aliases": [ 00:14:09.329 "2931a4fe-ea32-43b4-a6fb-7fc4c3e3cd23" 00:14:09.329 ], 00:14:09.329 "product_name": "Malloc disk", 00:14:09.329 "block_size": 512, 00:14:09.329 "num_blocks": 65536, 00:14:09.329 "uuid": "2931a4fe-ea32-43b4-a6fb-7fc4c3e3cd23", 
00:14:09.329 "assigned_rate_limits": { 00:14:09.329 "rw_ios_per_sec": 0, 00:14:09.329 "rw_mbytes_per_sec": 0, 00:14:09.329 "r_mbytes_per_sec": 0, 00:14:09.329 "w_mbytes_per_sec": 0 00:14:09.329 }, 00:14:09.329 "claimed": true, 00:14:09.329 "claim_type": "exclusive_write", 00:14:09.329 "zoned": false, 00:14:09.329 "supported_io_types": { 00:14:09.329 "read": true, 00:14:09.329 "write": true, 00:14:09.329 "unmap": true, 00:14:09.329 "flush": true, 00:14:09.329 "reset": true, 00:14:09.329 "nvme_admin": false, 00:14:09.329 "nvme_io": false, 00:14:09.329 "nvme_io_md": false, 00:14:09.329 "write_zeroes": true, 00:14:09.329 "zcopy": true, 00:14:09.329 "get_zone_info": false, 00:14:09.329 "zone_management": false, 00:14:09.329 "zone_append": false, 00:14:09.329 "compare": false, 00:14:09.329 "compare_and_write": false, 00:14:09.329 "abort": true, 00:14:09.329 "seek_hole": false, 00:14:09.329 "seek_data": false, 00:14:09.329 "copy": true, 00:14:09.329 "nvme_iov_md": false 00:14:09.329 }, 00:14:09.329 "memory_domains": [ 00:14:09.329 { 00:14:09.329 "dma_device_id": "system", 00:14:09.329 "dma_device_type": 1 00:14:09.329 }, 00:14:09.329 { 00:14:09.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.329 "dma_device_type": 2 00:14:09.329 } 00:14:09.329 ], 00:14:09.329 "driver_specific": {} 00:14:09.329 } 00:14:09.329 ] 00:14:09.329 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.329 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:09.329 15:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:09.329 15:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:09.329 15:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:09.329 15:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:14:09.329 15:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:09.329 15:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:09.329 15:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:09.329 15:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:09.329 15:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.329 15:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.329 15:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.329 15:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.329 15:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.329 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.329 15:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.329 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.329 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.329 15:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.329 "name": "Existed_Raid", 00:14:09.329 "uuid": "15b58854-87ae-46c5-8ead-951c19ac1293", 00:14:09.329 "strip_size_kb": 64, 00:14:09.329 "state": "configuring", 00:14:09.329 "raid_level": "concat", 00:14:09.329 "superblock": true, 00:14:09.329 "num_base_bdevs": 4, 00:14:09.329 "num_base_bdevs_discovered": 2, 00:14:09.329 
"num_base_bdevs_operational": 4, 00:14:09.329 "base_bdevs_list": [ 00:14:09.329 { 00:14:09.329 "name": "BaseBdev1", 00:14:09.329 "uuid": "e59a71c7-de27-4708-a3e1-7f35642c691c", 00:14:09.329 "is_configured": true, 00:14:09.330 "data_offset": 2048, 00:14:09.330 "data_size": 63488 00:14:09.330 }, 00:14:09.330 { 00:14:09.330 "name": "BaseBdev2", 00:14:09.330 "uuid": "2931a4fe-ea32-43b4-a6fb-7fc4c3e3cd23", 00:14:09.330 "is_configured": true, 00:14:09.330 "data_offset": 2048, 00:14:09.330 "data_size": 63488 00:14:09.330 }, 00:14:09.330 { 00:14:09.330 "name": "BaseBdev3", 00:14:09.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.330 "is_configured": false, 00:14:09.330 "data_offset": 0, 00:14:09.330 "data_size": 0 00:14:09.330 }, 00:14:09.330 { 00:14:09.330 "name": "BaseBdev4", 00:14:09.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.330 "is_configured": false, 00:14:09.330 "data_offset": 0, 00:14:09.330 "data_size": 0 00:14:09.330 } 00:14:09.330 ] 00:14:09.330 }' 00:14:09.330 15:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.330 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.590 15:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:09.590 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.590 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.590 [2024-12-06 15:40:52.835822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:09.590 BaseBdev3 00:14:09.590 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.590 15:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:09.590 15:40:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:09.590 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:09.590 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:09.590 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:09.590 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:09.590 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:09.590 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.590 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.590 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.590 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:09.590 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.590 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.590 [ 00:14:09.590 { 00:14:09.590 "name": "BaseBdev3", 00:14:09.590 "aliases": [ 00:14:09.590 "79b3e25f-ab53-4bbd-9206-4fd1805461ba" 00:14:09.590 ], 00:14:09.590 "product_name": "Malloc disk", 00:14:09.590 "block_size": 512, 00:14:09.590 "num_blocks": 65536, 00:14:09.590 "uuid": "79b3e25f-ab53-4bbd-9206-4fd1805461ba", 00:14:09.590 "assigned_rate_limits": { 00:14:09.590 "rw_ios_per_sec": 0, 00:14:09.590 "rw_mbytes_per_sec": 0, 00:14:09.590 "r_mbytes_per_sec": 0, 00:14:09.590 "w_mbytes_per_sec": 0 00:14:09.590 }, 00:14:09.590 "claimed": true, 00:14:09.590 "claim_type": "exclusive_write", 00:14:09.590 "zoned": false, 00:14:09.590 "supported_io_types": { 
00:14:09.590 "read": true, 00:14:09.590 "write": true, 00:14:09.590 "unmap": true, 00:14:09.590 "flush": true, 00:14:09.590 "reset": true, 00:14:09.590 "nvme_admin": false, 00:14:09.590 "nvme_io": false, 00:14:09.590 "nvme_io_md": false, 00:14:09.590 "write_zeroes": true, 00:14:09.590 "zcopy": true, 00:14:09.590 "get_zone_info": false, 00:14:09.590 "zone_management": false, 00:14:09.590 "zone_append": false, 00:14:09.590 "compare": false, 00:14:09.590 "compare_and_write": false, 00:14:09.590 "abort": true, 00:14:09.590 "seek_hole": false, 00:14:09.590 "seek_data": false, 00:14:09.590 "copy": true, 00:14:09.590 "nvme_iov_md": false 00:14:09.590 }, 00:14:09.590 "memory_domains": [ 00:14:09.590 { 00:14:09.590 "dma_device_id": "system", 00:14:09.590 "dma_device_type": 1 00:14:09.590 }, 00:14:09.590 { 00:14:09.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.590 "dma_device_type": 2 00:14:09.590 } 00:14:09.590 ], 00:14:09.590 "driver_specific": {} 00:14:09.590 } 00:14:09.590 ] 00:14:09.590 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.590 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:09.590 15:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:09.590 15:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:09.590 15:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:09.590 15:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:09.590 15:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:09.850 15:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:09.850 15:40:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:09.850 15:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:09.850 15:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.850 15:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.850 15:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.850 15:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.850 15:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.850 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.850 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.850 15:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.850 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.850 15:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.850 "name": "Existed_Raid", 00:14:09.850 "uuid": "15b58854-87ae-46c5-8ead-951c19ac1293", 00:14:09.850 "strip_size_kb": 64, 00:14:09.850 "state": "configuring", 00:14:09.850 "raid_level": "concat", 00:14:09.850 "superblock": true, 00:14:09.850 "num_base_bdevs": 4, 00:14:09.850 "num_base_bdevs_discovered": 3, 00:14:09.850 "num_base_bdevs_operational": 4, 00:14:09.850 "base_bdevs_list": [ 00:14:09.850 { 00:14:09.850 "name": "BaseBdev1", 00:14:09.850 "uuid": "e59a71c7-de27-4708-a3e1-7f35642c691c", 00:14:09.850 "is_configured": true, 00:14:09.850 "data_offset": 2048, 00:14:09.850 "data_size": 63488 00:14:09.850 }, 00:14:09.850 { 00:14:09.850 "name": "BaseBdev2", 00:14:09.850 
"uuid": "2931a4fe-ea32-43b4-a6fb-7fc4c3e3cd23", 00:14:09.850 "is_configured": true, 00:14:09.850 "data_offset": 2048, 00:14:09.850 "data_size": 63488 00:14:09.850 }, 00:14:09.850 { 00:14:09.850 "name": "BaseBdev3", 00:14:09.850 "uuid": "79b3e25f-ab53-4bbd-9206-4fd1805461ba", 00:14:09.850 "is_configured": true, 00:14:09.850 "data_offset": 2048, 00:14:09.850 "data_size": 63488 00:14:09.850 }, 00:14:09.851 { 00:14:09.851 "name": "BaseBdev4", 00:14:09.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.851 "is_configured": false, 00:14:09.851 "data_offset": 0, 00:14:09.851 "data_size": 0 00:14:09.851 } 00:14:09.851 ] 00:14:09.851 }' 00:14:09.851 15:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.851 15:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.110 15:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:10.110 15:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.110 15:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.110 [2024-12-06 15:40:53.332907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:10.110 [2024-12-06 15:40:53.333230] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:10.110 [2024-12-06 15:40:53.333248] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:10.110 [2024-12-06 15:40:53.333622] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:10.110 [2024-12-06 15:40:53.333803] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:10.110 [2024-12-06 15:40:53.333818] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:14:10.110 [2024-12-06 15:40:53.333987] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.110 BaseBdev4 00:14:10.110 15:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.110 15:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:10.110 15:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:10.110 15:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:10.110 15:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:10.110 15:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:10.110 15:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:10.110 15:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:10.110 15:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.110 15:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.110 15:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.110 15:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:10.110 15:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.110 15:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.110 [ 00:14:10.110 { 00:14:10.110 "name": "BaseBdev4", 00:14:10.110 "aliases": [ 00:14:10.110 "661091eb-5404-4907-89b0-7c8c1d16e398" 00:14:10.110 ], 00:14:10.110 "product_name": "Malloc disk", 00:14:10.110 "block_size": 512, 
00:14:10.110 "num_blocks": 65536, 00:14:10.110 "uuid": "661091eb-5404-4907-89b0-7c8c1d16e398", 00:14:10.110 "assigned_rate_limits": { 00:14:10.110 "rw_ios_per_sec": 0, 00:14:10.110 "rw_mbytes_per_sec": 0, 00:14:10.111 "r_mbytes_per_sec": 0, 00:14:10.111 "w_mbytes_per_sec": 0 00:14:10.111 }, 00:14:10.111 "claimed": true, 00:14:10.111 "claim_type": "exclusive_write", 00:14:10.111 "zoned": false, 00:14:10.111 "supported_io_types": { 00:14:10.111 "read": true, 00:14:10.111 "write": true, 00:14:10.111 "unmap": true, 00:14:10.111 "flush": true, 00:14:10.111 "reset": true, 00:14:10.111 "nvme_admin": false, 00:14:10.111 "nvme_io": false, 00:14:10.111 "nvme_io_md": false, 00:14:10.111 "write_zeroes": true, 00:14:10.111 "zcopy": true, 00:14:10.111 "get_zone_info": false, 00:14:10.111 "zone_management": false, 00:14:10.111 "zone_append": false, 00:14:10.111 "compare": false, 00:14:10.111 "compare_and_write": false, 00:14:10.111 "abort": true, 00:14:10.111 "seek_hole": false, 00:14:10.111 "seek_data": false, 00:14:10.111 "copy": true, 00:14:10.111 "nvme_iov_md": false 00:14:10.111 }, 00:14:10.111 "memory_domains": [ 00:14:10.111 { 00:14:10.111 "dma_device_id": "system", 00:14:10.111 "dma_device_type": 1 00:14:10.111 }, 00:14:10.111 { 00:14:10.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.111 "dma_device_type": 2 00:14:10.111 } 00:14:10.111 ], 00:14:10.111 "driver_specific": {} 00:14:10.111 } 00:14:10.111 ] 00:14:10.111 15:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.111 15:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:10.111 15:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:10.111 15:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:10.111 15:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 00:14:10.111 15:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:10.111 15:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.111 15:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:10.111 15:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.111 15:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:10.111 15:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.111 15:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.111 15:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.111 15:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.111 15:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.111 15:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.111 15:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.111 15:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.370 15:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.370 15:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.370 "name": "Existed_Raid", 00:14:10.370 "uuid": "15b58854-87ae-46c5-8ead-951c19ac1293", 00:14:10.370 "strip_size_kb": 64, 00:14:10.370 "state": "online", 00:14:10.370 "raid_level": "concat", 00:14:10.370 "superblock": true, 00:14:10.370 "num_base_bdevs": 
4, 00:14:10.370 "num_base_bdevs_discovered": 4, 00:14:10.370 "num_base_bdevs_operational": 4, 00:14:10.370 "base_bdevs_list": [ 00:14:10.370 { 00:14:10.370 "name": "BaseBdev1", 00:14:10.370 "uuid": "e59a71c7-de27-4708-a3e1-7f35642c691c", 00:14:10.370 "is_configured": true, 00:14:10.370 "data_offset": 2048, 00:14:10.370 "data_size": 63488 00:14:10.370 }, 00:14:10.370 { 00:14:10.370 "name": "BaseBdev2", 00:14:10.370 "uuid": "2931a4fe-ea32-43b4-a6fb-7fc4c3e3cd23", 00:14:10.370 "is_configured": true, 00:14:10.370 "data_offset": 2048, 00:14:10.370 "data_size": 63488 00:14:10.370 }, 00:14:10.370 { 00:14:10.370 "name": "BaseBdev3", 00:14:10.370 "uuid": "79b3e25f-ab53-4bbd-9206-4fd1805461ba", 00:14:10.370 "is_configured": true, 00:14:10.370 "data_offset": 2048, 00:14:10.370 "data_size": 63488 00:14:10.370 }, 00:14:10.370 { 00:14:10.370 "name": "BaseBdev4", 00:14:10.370 "uuid": "661091eb-5404-4907-89b0-7c8c1d16e398", 00:14:10.370 "is_configured": true, 00:14:10.370 "data_offset": 2048, 00:14:10.370 "data_size": 63488 00:14:10.370 } 00:14:10.370 ] 00:14:10.370 }' 00:14:10.370 15:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.370 15:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.629 15:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:10.629 15:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:10.629 15:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:10.629 15:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:10.629 15:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:10.629 15:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:10.629 
15:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:10.629 15:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.629 15:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.629 15:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:10.629 [2024-12-06 15:40:53.793044] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:10.629 15:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.629 15:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:10.629 "name": "Existed_Raid", 00:14:10.629 "aliases": [ 00:14:10.629 "15b58854-87ae-46c5-8ead-951c19ac1293" 00:14:10.629 ], 00:14:10.629 "product_name": "Raid Volume", 00:14:10.629 "block_size": 512, 00:14:10.629 "num_blocks": 253952, 00:14:10.629 "uuid": "15b58854-87ae-46c5-8ead-951c19ac1293", 00:14:10.629 "assigned_rate_limits": { 00:14:10.629 "rw_ios_per_sec": 0, 00:14:10.629 "rw_mbytes_per_sec": 0, 00:14:10.629 "r_mbytes_per_sec": 0, 00:14:10.629 "w_mbytes_per_sec": 0 00:14:10.629 }, 00:14:10.629 "claimed": false, 00:14:10.629 "zoned": false, 00:14:10.629 "supported_io_types": { 00:14:10.629 "read": true, 00:14:10.629 "write": true, 00:14:10.629 "unmap": true, 00:14:10.629 "flush": true, 00:14:10.629 "reset": true, 00:14:10.629 "nvme_admin": false, 00:14:10.629 "nvme_io": false, 00:14:10.629 "nvme_io_md": false, 00:14:10.629 "write_zeroes": true, 00:14:10.629 "zcopy": false, 00:14:10.629 "get_zone_info": false, 00:14:10.629 "zone_management": false, 00:14:10.629 "zone_append": false, 00:14:10.629 "compare": false, 00:14:10.629 "compare_and_write": false, 00:14:10.629 "abort": false, 00:14:10.629 "seek_hole": false, 00:14:10.629 "seek_data": false, 00:14:10.629 "copy": false, 00:14:10.629 
"nvme_iov_md": false 00:14:10.629 }, 00:14:10.629 "memory_domains": [ 00:14:10.629 { 00:14:10.629 "dma_device_id": "system", 00:14:10.629 "dma_device_type": 1 00:14:10.629 }, 00:14:10.629 { 00:14:10.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.629 "dma_device_type": 2 00:14:10.629 }, 00:14:10.629 { 00:14:10.629 "dma_device_id": "system", 00:14:10.629 "dma_device_type": 1 00:14:10.629 }, 00:14:10.629 { 00:14:10.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.629 "dma_device_type": 2 00:14:10.629 }, 00:14:10.629 { 00:14:10.629 "dma_device_id": "system", 00:14:10.629 "dma_device_type": 1 00:14:10.629 }, 00:14:10.629 { 00:14:10.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.629 "dma_device_type": 2 00:14:10.629 }, 00:14:10.629 { 00:14:10.629 "dma_device_id": "system", 00:14:10.629 "dma_device_type": 1 00:14:10.629 }, 00:14:10.629 { 00:14:10.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.629 "dma_device_type": 2 00:14:10.629 } 00:14:10.629 ], 00:14:10.629 "driver_specific": { 00:14:10.629 "raid": { 00:14:10.629 "uuid": "15b58854-87ae-46c5-8ead-951c19ac1293", 00:14:10.629 "strip_size_kb": 64, 00:14:10.629 "state": "online", 00:14:10.629 "raid_level": "concat", 00:14:10.629 "superblock": true, 00:14:10.629 "num_base_bdevs": 4, 00:14:10.629 "num_base_bdevs_discovered": 4, 00:14:10.629 "num_base_bdevs_operational": 4, 00:14:10.629 "base_bdevs_list": [ 00:14:10.629 { 00:14:10.629 "name": "BaseBdev1", 00:14:10.629 "uuid": "e59a71c7-de27-4708-a3e1-7f35642c691c", 00:14:10.629 "is_configured": true, 00:14:10.629 "data_offset": 2048, 00:14:10.629 "data_size": 63488 00:14:10.629 }, 00:14:10.629 { 00:14:10.629 "name": "BaseBdev2", 00:14:10.629 "uuid": "2931a4fe-ea32-43b4-a6fb-7fc4c3e3cd23", 00:14:10.629 "is_configured": true, 00:14:10.629 "data_offset": 2048, 00:14:10.629 "data_size": 63488 00:14:10.629 }, 00:14:10.629 { 00:14:10.629 "name": "BaseBdev3", 00:14:10.629 "uuid": "79b3e25f-ab53-4bbd-9206-4fd1805461ba", 00:14:10.629 "is_configured": true, 
00:14:10.629 "data_offset": 2048, 00:14:10.629 "data_size": 63488 00:14:10.629 }, 00:14:10.629 { 00:14:10.629 "name": "BaseBdev4", 00:14:10.629 "uuid": "661091eb-5404-4907-89b0-7c8c1d16e398", 00:14:10.629 "is_configured": true, 00:14:10.629 "data_offset": 2048, 00:14:10.629 "data_size": 63488 00:14:10.629 } 00:14:10.629 ] 00:14:10.629 } 00:14:10.629 } 00:14:10.629 }' 00:14:10.629 15:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:10.629 15:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:10.629 BaseBdev2 00:14:10.629 BaseBdev3 00:14:10.629 BaseBdev4' 00:14:10.629 15:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:10.888 15:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:10.888 15:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:10.888 15:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:10.888 15:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.888 15:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.888 15:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:10.888 15:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.888 15:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:10.888 15:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:10.888 15:40:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:10.888 15:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:10.888 15:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:10.888 15:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.888 15:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.888 15:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.888 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:10.888 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:10.888 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:10.889 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:10.889 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:10.889 15:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.889 15:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.889 15:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.889 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:10.889 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:10.889 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:14:10.889 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:10.889 15:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.889 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:10.889 15:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.889 15:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.889 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:10.889 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:10.889 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:10.889 15:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.889 15:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.889 [2024-12-06 15:40:54.096405] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:10.889 [2024-12-06 15:40:54.096565] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:10.889 [2024-12-06 15:40:54.096661] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:11.148 15:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.148 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:11.148 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:14:11.148 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:14:11.148 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:14:11.148 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:11.148 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:14:11.148 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:11.148 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:11.148 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:11.148 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:11.148 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:11.148 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.148 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.148 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.148 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.148 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.148 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.148 15:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.148 15:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.148 15:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:11.148 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.148 "name": "Existed_Raid", 00:14:11.148 "uuid": "15b58854-87ae-46c5-8ead-951c19ac1293", 00:14:11.148 "strip_size_kb": 64, 00:14:11.148 "state": "offline", 00:14:11.148 "raid_level": "concat", 00:14:11.148 "superblock": true, 00:14:11.148 "num_base_bdevs": 4, 00:14:11.148 "num_base_bdevs_discovered": 3, 00:14:11.148 "num_base_bdevs_operational": 3, 00:14:11.148 "base_bdevs_list": [ 00:14:11.148 { 00:14:11.148 "name": null, 00:14:11.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.148 "is_configured": false, 00:14:11.148 "data_offset": 0, 00:14:11.148 "data_size": 63488 00:14:11.148 }, 00:14:11.148 { 00:14:11.148 "name": "BaseBdev2", 00:14:11.148 "uuid": "2931a4fe-ea32-43b4-a6fb-7fc4c3e3cd23", 00:14:11.148 "is_configured": true, 00:14:11.148 "data_offset": 2048, 00:14:11.148 "data_size": 63488 00:14:11.148 }, 00:14:11.148 { 00:14:11.148 "name": "BaseBdev3", 00:14:11.148 "uuid": "79b3e25f-ab53-4bbd-9206-4fd1805461ba", 00:14:11.148 "is_configured": true, 00:14:11.148 "data_offset": 2048, 00:14:11.148 "data_size": 63488 00:14:11.148 }, 00:14:11.148 { 00:14:11.148 "name": "BaseBdev4", 00:14:11.148 "uuid": "661091eb-5404-4907-89b0-7c8c1d16e398", 00:14:11.148 "is_configured": true, 00:14:11.148 "data_offset": 2048, 00:14:11.148 "data_size": 63488 00:14:11.148 } 00:14:11.148 ] 00:14:11.148 }' 00:14:11.148 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.148 15:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.407 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:11.407 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:11.407 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:11.407 15:40:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.407 15:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.407 15:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.407 15:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.407 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:11.408 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:11.408 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:11.408 15:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.408 15:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.408 [2024-12-06 15:40:54.633719] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:11.666 15:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.666 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:11.666 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:11.666 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.666 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:11.666 15:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.666 15:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.666 15:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:11.666 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:11.666 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:11.666 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:11.666 15:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.666 15:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.666 [2024-12-06 15:40:54.795630] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:11.666 15:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.666 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:11.666 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:11.666 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.666 15:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.666 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:11.666 15:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.666 15:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.666 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:11.666 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:11.666 15:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:11.666 15:40:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.666 15:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.666 [2024-12-06 15:40:54.956807] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:11.666 [2024-12-06 15:40:54.956878] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:11.925 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.925 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:11.925 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:11.925 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.925 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.925 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:11.925 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.925 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.925 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:11.925 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:11.925 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:11.925 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:11.925 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:11.925 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:14:11.925 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.925 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.925 BaseBdev2 00:14:11.925 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.925 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:11.925 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:11.925 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:11.925 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:11.925 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:11.925 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:11.925 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:11.925 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.925 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.925 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.925 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:11.925 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.925 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.925 [ 00:14:11.925 { 00:14:11.925 "name": "BaseBdev2", 00:14:11.925 "aliases": [ 00:14:11.925 
"a71e1350-74c0-43e0-ae1a-4f84140d3485" 00:14:11.925 ], 00:14:11.925 "product_name": "Malloc disk", 00:14:11.925 "block_size": 512, 00:14:11.925 "num_blocks": 65536, 00:14:11.925 "uuid": "a71e1350-74c0-43e0-ae1a-4f84140d3485", 00:14:11.925 "assigned_rate_limits": { 00:14:11.925 "rw_ios_per_sec": 0, 00:14:11.925 "rw_mbytes_per_sec": 0, 00:14:11.925 "r_mbytes_per_sec": 0, 00:14:11.925 "w_mbytes_per_sec": 0 00:14:11.925 }, 00:14:11.925 "claimed": false, 00:14:11.925 "zoned": false, 00:14:11.925 "supported_io_types": { 00:14:11.925 "read": true, 00:14:11.925 "write": true, 00:14:11.925 "unmap": true, 00:14:11.925 "flush": true, 00:14:11.925 "reset": true, 00:14:11.925 "nvme_admin": false, 00:14:11.925 "nvme_io": false, 00:14:11.925 "nvme_io_md": false, 00:14:11.925 "write_zeroes": true, 00:14:11.925 "zcopy": true, 00:14:11.925 "get_zone_info": false, 00:14:11.925 "zone_management": false, 00:14:11.925 "zone_append": false, 00:14:11.925 "compare": false, 00:14:11.925 "compare_and_write": false, 00:14:11.925 "abort": true, 00:14:11.925 "seek_hole": false, 00:14:11.925 "seek_data": false, 00:14:11.925 "copy": true, 00:14:11.925 "nvme_iov_md": false 00:14:11.925 }, 00:14:11.925 "memory_domains": [ 00:14:11.925 { 00:14:11.925 "dma_device_id": "system", 00:14:11.925 "dma_device_type": 1 00:14:11.925 }, 00:14:11.925 { 00:14:12.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.184 "dma_device_type": 2 00:14:12.184 } 00:14:12.184 ], 00:14:12.184 "driver_specific": {} 00:14:12.184 } 00:14:12.184 ] 00:14:12.184 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.184 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:12.184 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:12.184 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:12.184 15:40:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:12.184 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.184 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.184 BaseBdev3 00:14:12.184 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.184 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:12.184 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:12.184 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:12.184 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:12.184 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:12.184 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:12.184 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:12.184 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.184 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.184 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.184 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:12.184 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.184 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.184 [ 00:14:12.184 { 
00:14:12.184 "name": "BaseBdev3", 00:14:12.184 "aliases": [ 00:14:12.184 "f4751ea9-ed69-4697-9d44-9db09513d988" 00:14:12.184 ], 00:14:12.184 "product_name": "Malloc disk", 00:14:12.184 "block_size": 512, 00:14:12.184 "num_blocks": 65536, 00:14:12.184 "uuid": "f4751ea9-ed69-4697-9d44-9db09513d988", 00:14:12.184 "assigned_rate_limits": { 00:14:12.184 "rw_ios_per_sec": 0, 00:14:12.184 "rw_mbytes_per_sec": 0, 00:14:12.184 "r_mbytes_per_sec": 0, 00:14:12.184 "w_mbytes_per_sec": 0 00:14:12.184 }, 00:14:12.184 "claimed": false, 00:14:12.184 "zoned": false, 00:14:12.184 "supported_io_types": { 00:14:12.184 "read": true, 00:14:12.184 "write": true, 00:14:12.184 "unmap": true, 00:14:12.184 "flush": true, 00:14:12.184 "reset": true, 00:14:12.184 "nvme_admin": false, 00:14:12.184 "nvme_io": false, 00:14:12.184 "nvme_io_md": false, 00:14:12.184 "write_zeroes": true, 00:14:12.184 "zcopy": true, 00:14:12.184 "get_zone_info": false, 00:14:12.184 "zone_management": false, 00:14:12.184 "zone_append": false, 00:14:12.184 "compare": false, 00:14:12.184 "compare_and_write": false, 00:14:12.184 "abort": true, 00:14:12.184 "seek_hole": false, 00:14:12.184 "seek_data": false, 00:14:12.184 "copy": true, 00:14:12.184 "nvme_iov_md": false 00:14:12.184 }, 00:14:12.184 "memory_domains": [ 00:14:12.184 { 00:14:12.184 "dma_device_id": "system", 00:14:12.184 "dma_device_type": 1 00:14:12.184 }, 00:14:12.184 { 00:14:12.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.184 "dma_device_type": 2 00:14:12.184 } 00:14:12.184 ], 00:14:12.184 "driver_specific": {} 00:14:12.184 } 00:14:12.184 ] 00:14:12.184 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.184 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:12.184 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:12.184 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:14:12.184 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:12.184 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.184 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.184 BaseBdev4 00:14:12.184 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.184 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:12.184 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:12.184 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:12.184 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:12.184 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:12.184 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:12.184 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:12.184 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.184 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.184 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.185 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:12.185 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.185 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:14:12.185 [ 00:14:12.185 { 00:14:12.185 "name": "BaseBdev4", 00:14:12.185 "aliases": [ 00:14:12.185 "a334c920-47ab-47b3-9054-5ce924e3faaf" 00:14:12.185 ], 00:14:12.185 "product_name": "Malloc disk", 00:14:12.185 "block_size": 512, 00:14:12.185 "num_blocks": 65536, 00:14:12.185 "uuid": "a334c920-47ab-47b3-9054-5ce924e3faaf", 00:14:12.185 "assigned_rate_limits": { 00:14:12.185 "rw_ios_per_sec": 0, 00:14:12.185 "rw_mbytes_per_sec": 0, 00:14:12.185 "r_mbytes_per_sec": 0, 00:14:12.185 "w_mbytes_per_sec": 0 00:14:12.185 }, 00:14:12.185 "claimed": false, 00:14:12.185 "zoned": false, 00:14:12.185 "supported_io_types": { 00:14:12.185 "read": true, 00:14:12.185 "write": true, 00:14:12.185 "unmap": true, 00:14:12.185 "flush": true, 00:14:12.185 "reset": true, 00:14:12.185 "nvme_admin": false, 00:14:12.185 "nvme_io": false, 00:14:12.185 "nvme_io_md": false, 00:14:12.185 "write_zeroes": true, 00:14:12.185 "zcopy": true, 00:14:12.185 "get_zone_info": false, 00:14:12.185 "zone_management": false, 00:14:12.185 "zone_append": false, 00:14:12.185 "compare": false, 00:14:12.185 "compare_and_write": false, 00:14:12.185 "abort": true, 00:14:12.185 "seek_hole": false, 00:14:12.185 "seek_data": false, 00:14:12.185 "copy": true, 00:14:12.185 "nvme_iov_md": false 00:14:12.185 }, 00:14:12.185 "memory_domains": [ 00:14:12.185 { 00:14:12.185 "dma_device_id": "system", 00:14:12.185 "dma_device_type": 1 00:14:12.185 }, 00:14:12.185 { 00:14:12.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.185 "dma_device_type": 2 00:14:12.185 } 00:14:12.185 ], 00:14:12.185 "driver_specific": {} 00:14:12.185 } 00:14:12.185 ] 00:14:12.185 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.185 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:12.185 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:12.185 15:40:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:12.185 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:12.185 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.185 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.185 [2024-12-06 15:40:55.399238] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:12.185 [2024-12-06 15:40:55.399297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:12.185 [2024-12-06 15:40:55.399326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:12.185 [2024-12-06 15:40:55.401980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:12.185 [2024-12-06 15:40:55.402054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:12.185 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.185 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:12.185 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:12.185 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:12.185 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:12.185 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:12.185 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:14:12.185 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.185 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.185 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.185 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.185 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.185 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.185 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.185 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.185 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.185 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.185 "name": "Existed_Raid", 00:14:12.185 "uuid": "de1d32b2-aae1-48c7-90a9-35fbd1d38128", 00:14:12.185 "strip_size_kb": 64, 00:14:12.185 "state": "configuring", 00:14:12.185 "raid_level": "concat", 00:14:12.185 "superblock": true, 00:14:12.185 "num_base_bdevs": 4, 00:14:12.185 "num_base_bdevs_discovered": 3, 00:14:12.185 "num_base_bdevs_operational": 4, 00:14:12.185 "base_bdevs_list": [ 00:14:12.185 { 00:14:12.185 "name": "BaseBdev1", 00:14:12.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.185 "is_configured": false, 00:14:12.185 "data_offset": 0, 00:14:12.185 "data_size": 0 00:14:12.185 }, 00:14:12.185 { 00:14:12.185 "name": "BaseBdev2", 00:14:12.185 "uuid": "a71e1350-74c0-43e0-ae1a-4f84140d3485", 00:14:12.185 "is_configured": true, 00:14:12.185 "data_offset": 2048, 00:14:12.185 "data_size": 63488 
00:14:12.185 }, 00:14:12.185 { 00:14:12.185 "name": "BaseBdev3", 00:14:12.185 "uuid": "f4751ea9-ed69-4697-9d44-9db09513d988", 00:14:12.185 "is_configured": true, 00:14:12.185 "data_offset": 2048, 00:14:12.185 "data_size": 63488 00:14:12.185 }, 00:14:12.185 { 00:14:12.185 "name": "BaseBdev4", 00:14:12.185 "uuid": "a334c920-47ab-47b3-9054-5ce924e3faaf", 00:14:12.185 "is_configured": true, 00:14:12.185 "data_offset": 2048, 00:14:12.185 "data_size": 63488 00:14:12.185 } 00:14:12.185 ] 00:14:12.185 }' 00:14:12.185 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.185 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.751 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:12.751 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.751 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.751 [2024-12-06 15:40:55.822688] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:12.751 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.751 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:12.751 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:12.751 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:12.751 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:12.751 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:12.751 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:14:12.751 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.751 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.751 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.751 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.751 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.752 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.752 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.752 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.752 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.752 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.752 "name": "Existed_Raid", 00:14:12.752 "uuid": "de1d32b2-aae1-48c7-90a9-35fbd1d38128", 00:14:12.752 "strip_size_kb": 64, 00:14:12.752 "state": "configuring", 00:14:12.752 "raid_level": "concat", 00:14:12.752 "superblock": true, 00:14:12.752 "num_base_bdevs": 4, 00:14:12.752 "num_base_bdevs_discovered": 2, 00:14:12.752 "num_base_bdevs_operational": 4, 00:14:12.752 "base_bdevs_list": [ 00:14:12.752 { 00:14:12.752 "name": "BaseBdev1", 00:14:12.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.752 "is_configured": false, 00:14:12.752 "data_offset": 0, 00:14:12.752 "data_size": 0 00:14:12.752 }, 00:14:12.752 { 00:14:12.752 "name": null, 00:14:12.752 "uuid": "a71e1350-74c0-43e0-ae1a-4f84140d3485", 00:14:12.752 "is_configured": false, 00:14:12.752 "data_offset": 0, 00:14:12.752 "data_size": 63488 
00:14:12.752 }, 00:14:12.752 { 00:14:12.752 "name": "BaseBdev3", 00:14:12.752 "uuid": "f4751ea9-ed69-4697-9d44-9db09513d988", 00:14:12.752 "is_configured": true, 00:14:12.752 "data_offset": 2048, 00:14:12.752 "data_size": 63488 00:14:12.752 }, 00:14:12.752 { 00:14:12.752 "name": "BaseBdev4", 00:14:12.752 "uuid": "a334c920-47ab-47b3-9054-5ce924e3faaf", 00:14:12.752 "is_configured": true, 00:14:12.752 "data_offset": 2048, 00:14:12.752 "data_size": 63488 00:14:12.752 } 00:14:12.752 ] 00:14:12.752 }' 00:14:12.752 15:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.752 15:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.010 15:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.010 15:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.010 15:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.010 15:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:13.010 15:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.010 15:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:13.010 15:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:13.010 15:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.010 15:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.268 [2024-12-06 15:40:56.338178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:13.268 BaseBdev1 00:14:13.268 15:40:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.268 15:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:13.268 15:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:13.268 15:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:13.268 15:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:13.268 15:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:13.268 15:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:13.268 15:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:13.268 15:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.268 15:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.268 15:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.268 15:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:13.268 15:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.268 15:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.268 [ 00:14:13.268 { 00:14:13.268 "name": "BaseBdev1", 00:14:13.268 "aliases": [ 00:14:13.268 "bc692068-ee30-4903-8d6e-f10abce1f05c" 00:14:13.268 ], 00:14:13.268 "product_name": "Malloc disk", 00:14:13.268 "block_size": 512, 00:14:13.268 "num_blocks": 65536, 00:14:13.268 "uuid": "bc692068-ee30-4903-8d6e-f10abce1f05c", 00:14:13.268 "assigned_rate_limits": { 00:14:13.268 "rw_ios_per_sec": 0, 00:14:13.268 "rw_mbytes_per_sec": 0, 
00:14:13.268 "r_mbytes_per_sec": 0, 00:14:13.268 "w_mbytes_per_sec": 0 00:14:13.268 }, 00:14:13.268 "claimed": true, 00:14:13.268 "claim_type": "exclusive_write", 00:14:13.268 "zoned": false, 00:14:13.268 "supported_io_types": { 00:14:13.268 "read": true, 00:14:13.268 "write": true, 00:14:13.268 "unmap": true, 00:14:13.268 "flush": true, 00:14:13.268 "reset": true, 00:14:13.268 "nvme_admin": false, 00:14:13.268 "nvme_io": false, 00:14:13.268 "nvme_io_md": false, 00:14:13.268 "write_zeroes": true, 00:14:13.268 "zcopy": true, 00:14:13.268 "get_zone_info": false, 00:14:13.268 "zone_management": false, 00:14:13.268 "zone_append": false, 00:14:13.268 "compare": false, 00:14:13.268 "compare_and_write": false, 00:14:13.268 "abort": true, 00:14:13.268 "seek_hole": false, 00:14:13.268 "seek_data": false, 00:14:13.268 "copy": true, 00:14:13.268 "nvme_iov_md": false 00:14:13.268 }, 00:14:13.268 "memory_domains": [ 00:14:13.268 { 00:14:13.268 "dma_device_id": "system", 00:14:13.268 "dma_device_type": 1 00:14:13.268 }, 00:14:13.268 { 00:14:13.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.268 "dma_device_type": 2 00:14:13.268 } 00:14:13.268 ], 00:14:13.268 "driver_specific": {} 00:14:13.268 } 00:14:13.268 ] 00:14:13.268 15:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.268 15:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:13.268 15:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:13.268 15:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:13.268 15:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:13.268 15:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:13.268 15:40:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:13.268 15:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:13.268 15:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.268 15:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.268 15:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.269 15:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.269 15:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.269 15:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.269 15:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.269 15:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.269 15:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.269 15:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.269 "name": "Existed_Raid", 00:14:13.269 "uuid": "de1d32b2-aae1-48c7-90a9-35fbd1d38128", 00:14:13.269 "strip_size_kb": 64, 00:14:13.269 "state": "configuring", 00:14:13.269 "raid_level": "concat", 00:14:13.269 "superblock": true, 00:14:13.269 "num_base_bdevs": 4, 00:14:13.269 "num_base_bdevs_discovered": 3, 00:14:13.269 "num_base_bdevs_operational": 4, 00:14:13.269 "base_bdevs_list": [ 00:14:13.269 { 00:14:13.269 "name": "BaseBdev1", 00:14:13.269 "uuid": "bc692068-ee30-4903-8d6e-f10abce1f05c", 00:14:13.269 "is_configured": true, 00:14:13.269 "data_offset": 2048, 00:14:13.269 "data_size": 63488 00:14:13.269 }, 00:14:13.269 { 
00:14:13.269 "name": null, 00:14:13.269 "uuid": "a71e1350-74c0-43e0-ae1a-4f84140d3485", 00:14:13.269 "is_configured": false, 00:14:13.269 "data_offset": 0, 00:14:13.269 "data_size": 63488 00:14:13.269 }, 00:14:13.269 { 00:14:13.269 "name": "BaseBdev3", 00:14:13.269 "uuid": "f4751ea9-ed69-4697-9d44-9db09513d988", 00:14:13.269 "is_configured": true, 00:14:13.269 "data_offset": 2048, 00:14:13.269 "data_size": 63488 00:14:13.269 }, 00:14:13.269 { 00:14:13.269 "name": "BaseBdev4", 00:14:13.269 "uuid": "a334c920-47ab-47b3-9054-5ce924e3faaf", 00:14:13.269 "is_configured": true, 00:14:13.269 "data_offset": 2048, 00:14:13.269 "data_size": 63488 00:14:13.269 } 00:14:13.269 ] 00:14:13.269 }' 00:14:13.269 15:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.269 15:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.833 15:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.833 15:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:13.833 15:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.833 15:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.833 15:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.833 15:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:13.833 15:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:13.833 15:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.833 15:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.833 [2024-12-06 15:40:56.881617] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:13.833 15:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.833 15:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:13.833 15:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:13.833 15:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:13.833 15:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:13.833 15:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:13.833 15:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:13.833 15:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.833 15:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.833 15:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.833 15:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.833 15:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.833 15:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.833 15:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.833 15:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.833 15:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.833 15:40:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.833 "name": "Existed_Raid", 00:14:13.833 "uuid": "de1d32b2-aae1-48c7-90a9-35fbd1d38128", 00:14:13.833 "strip_size_kb": 64, 00:14:13.833 "state": "configuring", 00:14:13.833 "raid_level": "concat", 00:14:13.833 "superblock": true, 00:14:13.833 "num_base_bdevs": 4, 00:14:13.833 "num_base_bdevs_discovered": 2, 00:14:13.833 "num_base_bdevs_operational": 4, 00:14:13.833 "base_bdevs_list": [ 00:14:13.833 { 00:14:13.833 "name": "BaseBdev1", 00:14:13.833 "uuid": "bc692068-ee30-4903-8d6e-f10abce1f05c", 00:14:13.833 "is_configured": true, 00:14:13.833 "data_offset": 2048, 00:14:13.833 "data_size": 63488 00:14:13.833 }, 00:14:13.833 { 00:14:13.833 "name": null, 00:14:13.833 "uuid": "a71e1350-74c0-43e0-ae1a-4f84140d3485", 00:14:13.833 "is_configured": false, 00:14:13.833 "data_offset": 0, 00:14:13.833 "data_size": 63488 00:14:13.833 }, 00:14:13.833 { 00:14:13.833 "name": null, 00:14:13.833 "uuid": "f4751ea9-ed69-4697-9d44-9db09513d988", 00:14:13.833 "is_configured": false, 00:14:13.833 "data_offset": 0, 00:14:13.833 "data_size": 63488 00:14:13.833 }, 00:14:13.833 { 00:14:13.833 "name": "BaseBdev4", 00:14:13.833 "uuid": "a334c920-47ab-47b3-9054-5ce924e3faaf", 00:14:13.833 "is_configured": true, 00:14:13.833 "data_offset": 2048, 00:14:13.833 "data_size": 63488 00:14:13.833 } 00:14:13.833 ] 00:14:13.834 }' 00:14:13.834 15:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.834 15:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.093 15:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:14.093 15:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.093 15:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.093 
15:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.093 15:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.093 15:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:14.093 15:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:14.093 15:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.093 15:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.093 [2024-12-06 15:40:57.372891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:14.093 15:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.093 15:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:14.093 15:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:14.093 15:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:14.093 15:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:14.093 15:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.093 15:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:14.093 15:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.093 15:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.093 15:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:14.093 15:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.093 15:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.093 15:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.093 15:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.093 15:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.350 15:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.350 15:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.350 "name": "Existed_Raid", 00:14:14.350 "uuid": "de1d32b2-aae1-48c7-90a9-35fbd1d38128", 00:14:14.350 "strip_size_kb": 64, 00:14:14.350 "state": "configuring", 00:14:14.350 "raid_level": "concat", 00:14:14.350 "superblock": true, 00:14:14.350 "num_base_bdevs": 4, 00:14:14.350 "num_base_bdevs_discovered": 3, 00:14:14.350 "num_base_bdevs_operational": 4, 00:14:14.350 "base_bdevs_list": [ 00:14:14.350 { 00:14:14.350 "name": "BaseBdev1", 00:14:14.350 "uuid": "bc692068-ee30-4903-8d6e-f10abce1f05c", 00:14:14.350 "is_configured": true, 00:14:14.350 "data_offset": 2048, 00:14:14.350 "data_size": 63488 00:14:14.350 }, 00:14:14.350 { 00:14:14.350 "name": null, 00:14:14.350 "uuid": "a71e1350-74c0-43e0-ae1a-4f84140d3485", 00:14:14.350 "is_configured": false, 00:14:14.350 "data_offset": 0, 00:14:14.350 "data_size": 63488 00:14:14.350 }, 00:14:14.350 { 00:14:14.350 "name": "BaseBdev3", 00:14:14.350 "uuid": "f4751ea9-ed69-4697-9d44-9db09513d988", 00:14:14.350 "is_configured": true, 00:14:14.350 "data_offset": 2048, 00:14:14.350 "data_size": 63488 00:14:14.350 }, 00:14:14.350 { 00:14:14.350 "name": "BaseBdev4", 00:14:14.350 "uuid": 
"a334c920-47ab-47b3-9054-5ce924e3faaf", 00:14:14.350 "is_configured": true, 00:14:14.350 "data_offset": 2048, 00:14:14.350 "data_size": 63488 00:14:14.350 } 00:14:14.350 ] 00:14:14.350 }' 00:14:14.350 15:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.350 15:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.607 15:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:14.607 15:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.607 15:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.607 15:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.607 15:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.607 15:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:14.607 15:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:14.607 15:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.607 15:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.607 [2024-12-06 15:40:57.828303] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:14.911 15:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.911 15:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:14.911 15:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:14.911 15:40:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:14.911 15:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:14.911 15:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.911 15:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:14.911 15:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.911 15:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.911 15:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.911 15:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.911 15:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.911 15:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.911 15:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.911 15:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.911 15:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.911 15:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.911 "name": "Existed_Raid", 00:14:14.911 "uuid": "de1d32b2-aae1-48c7-90a9-35fbd1d38128", 00:14:14.911 "strip_size_kb": 64, 00:14:14.911 "state": "configuring", 00:14:14.911 "raid_level": "concat", 00:14:14.911 "superblock": true, 00:14:14.911 "num_base_bdevs": 4, 00:14:14.911 "num_base_bdevs_discovered": 2, 00:14:14.911 "num_base_bdevs_operational": 4, 00:14:14.911 "base_bdevs_list": [ 00:14:14.911 { 00:14:14.911 "name": null, 00:14:14.911 
"uuid": "bc692068-ee30-4903-8d6e-f10abce1f05c", 00:14:14.911 "is_configured": false, 00:14:14.911 "data_offset": 0, 00:14:14.911 "data_size": 63488 00:14:14.911 }, 00:14:14.911 { 00:14:14.911 "name": null, 00:14:14.911 "uuid": "a71e1350-74c0-43e0-ae1a-4f84140d3485", 00:14:14.911 "is_configured": false, 00:14:14.911 "data_offset": 0, 00:14:14.911 "data_size": 63488 00:14:14.911 }, 00:14:14.911 { 00:14:14.911 "name": "BaseBdev3", 00:14:14.911 "uuid": "f4751ea9-ed69-4697-9d44-9db09513d988", 00:14:14.911 "is_configured": true, 00:14:14.911 "data_offset": 2048, 00:14:14.911 "data_size": 63488 00:14:14.911 }, 00:14:14.911 { 00:14:14.911 "name": "BaseBdev4", 00:14:14.911 "uuid": "a334c920-47ab-47b3-9054-5ce924e3faaf", 00:14:14.911 "is_configured": true, 00:14:14.911 "data_offset": 2048, 00:14:14.911 "data_size": 63488 00:14:14.911 } 00:14:14.911 ] 00:14:14.911 }' 00:14:14.911 15:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.911 15:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.168 15:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.168 15:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:15.168 15:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.168 15:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.168 15:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.168 15:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:15.168 15:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:15.168 15:40:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.168 15:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.168 [2024-12-06 15:40:58.435285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:15.168 15:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.168 15:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:15.168 15:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:15.168 15:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:15.168 15:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:15.168 15:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:15.168 15:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:15.168 15:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.168 15:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.168 15:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.168 15:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.168 15:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.168 15:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.168 15:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.168 15:40:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.425 15:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.425 15:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.425 "name": "Existed_Raid", 00:14:15.425 "uuid": "de1d32b2-aae1-48c7-90a9-35fbd1d38128", 00:14:15.425 "strip_size_kb": 64, 00:14:15.425 "state": "configuring", 00:14:15.425 "raid_level": "concat", 00:14:15.425 "superblock": true, 00:14:15.425 "num_base_bdevs": 4, 00:14:15.425 "num_base_bdevs_discovered": 3, 00:14:15.425 "num_base_bdevs_operational": 4, 00:14:15.425 "base_bdevs_list": [ 00:14:15.425 { 00:14:15.425 "name": null, 00:14:15.425 "uuid": "bc692068-ee30-4903-8d6e-f10abce1f05c", 00:14:15.425 "is_configured": false, 00:14:15.425 "data_offset": 0, 00:14:15.425 "data_size": 63488 00:14:15.425 }, 00:14:15.425 { 00:14:15.425 "name": "BaseBdev2", 00:14:15.425 "uuid": "a71e1350-74c0-43e0-ae1a-4f84140d3485", 00:14:15.425 "is_configured": true, 00:14:15.425 "data_offset": 2048, 00:14:15.425 "data_size": 63488 00:14:15.425 }, 00:14:15.425 { 00:14:15.425 "name": "BaseBdev3", 00:14:15.425 "uuid": "f4751ea9-ed69-4697-9d44-9db09513d988", 00:14:15.425 "is_configured": true, 00:14:15.426 "data_offset": 2048, 00:14:15.426 "data_size": 63488 00:14:15.426 }, 00:14:15.426 { 00:14:15.426 "name": "BaseBdev4", 00:14:15.426 "uuid": "a334c920-47ab-47b3-9054-5ce924e3faaf", 00:14:15.426 "is_configured": true, 00:14:15.426 "data_offset": 2048, 00:14:15.426 "data_size": 63488 00:14:15.426 } 00:14:15.426 ] 00:14:15.426 }' 00:14:15.426 15:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.426 15:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.683 15:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.683 15:40:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:15.683 15:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.683 15:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.683 15:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.683 15:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:15.683 15:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.683 15:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:15.683 15:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.683 15:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.683 15:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.941 15:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u bc692068-ee30-4903-8d6e-f10abce1f05c 00:14:15.941 15:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.941 15:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.941 [2024-12-06 15:40:59.030620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:15.941 [2024-12-06 15:40:59.030962] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:15.941 [2024-12-06 15:40:59.030978] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:15.941 [2024-12-06 15:40:59.031304] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:14:15.941 NewBaseBdev 00:14:15.941 [2024-12-06 15:40:59.031474] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:15.941 [2024-12-06 15:40:59.031490] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:15.941 [2024-12-06 15:40:59.031666] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:15.941 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.941 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:15.941 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:15.941 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:15.941 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:15.941 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:15.941 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:15.941 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:15.941 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.941 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.941 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.941 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:15.941 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.941 15:40:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.941 [ 00:14:15.941 { 00:14:15.941 "name": "NewBaseBdev", 00:14:15.941 "aliases": [ 00:14:15.941 "bc692068-ee30-4903-8d6e-f10abce1f05c" 00:14:15.941 ], 00:14:15.941 "product_name": "Malloc disk", 00:14:15.941 "block_size": 512, 00:14:15.941 "num_blocks": 65536, 00:14:15.941 "uuid": "bc692068-ee30-4903-8d6e-f10abce1f05c", 00:14:15.941 "assigned_rate_limits": { 00:14:15.941 "rw_ios_per_sec": 0, 00:14:15.941 "rw_mbytes_per_sec": 0, 00:14:15.941 "r_mbytes_per_sec": 0, 00:14:15.941 "w_mbytes_per_sec": 0 00:14:15.941 }, 00:14:15.941 "claimed": true, 00:14:15.941 "claim_type": "exclusive_write", 00:14:15.941 "zoned": false, 00:14:15.941 "supported_io_types": { 00:14:15.941 "read": true, 00:14:15.941 "write": true, 00:14:15.941 "unmap": true, 00:14:15.941 "flush": true, 00:14:15.941 "reset": true, 00:14:15.941 "nvme_admin": false, 00:14:15.941 "nvme_io": false, 00:14:15.941 "nvme_io_md": false, 00:14:15.941 "write_zeroes": true, 00:14:15.941 "zcopy": true, 00:14:15.941 "get_zone_info": false, 00:14:15.941 "zone_management": false, 00:14:15.941 "zone_append": false, 00:14:15.941 "compare": false, 00:14:15.941 "compare_and_write": false, 00:14:15.941 "abort": true, 00:14:15.941 "seek_hole": false, 00:14:15.941 "seek_data": false, 00:14:15.941 "copy": true, 00:14:15.941 "nvme_iov_md": false 00:14:15.941 }, 00:14:15.941 "memory_domains": [ 00:14:15.941 { 00:14:15.941 "dma_device_id": "system", 00:14:15.941 "dma_device_type": 1 00:14:15.941 }, 00:14:15.941 { 00:14:15.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.941 "dma_device_type": 2 00:14:15.941 } 00:14:15.941 ], 00:14:15.941 "driver_specific": {} 00:14:15.941 } 00:14:15.941 ] 00:14:15.941 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.941 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:15.941 15:40:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:15.941 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:15.941 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:15.941 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:15.941 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:15.941 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:15.941 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.941 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.941 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.941 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.941 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.941 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.941 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.941 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.941 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.941 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.941 "name": "Existed_Raid", 00:14:15.941 "uuid": "de1d32b2-aae1-48c7-90a9-35fbd1d38128", 00:14:15.941 "strip_size_kb": 64, 00:14:15.941 
"state": "online", 00:14:15.941 "raid_level": "concat", 00:14:15.941 "superblock": true, 00:14:15.941 "num_base_bdevs": 4, 00:14:15.941 "num_base_bdevs_discovered": 4, 00:14:15.941 "num_base_bdevs_operational": 4, 00:14:15.941 "base_bdevs_list": [ 00:14:15.941 { 00:14:15.941 "name": "NewBaseBdev", 00:14:15.941 "uuid": "bc692068-ee30-4903-8d6e-f10abce1f05c", 00:14:15.941 "is_configured": true, 00:14:15.941 "data_offset": 2048, 00:14:15.941 "data_size": 63488 00:14:15.941 }, 00:14:15.941 { 00:14:15.941 "name": "BaseBdev2", 00:14:15.941 "uuid": "a71e1350-74c0-43e0-ae1a-4f84140d3485", 00:14:15.941 "is_configured": true, 00:14:15.941 "data_offset": 2048, 00:14:15.941 "data_size": 63488 00:14:15.941 }, 00:14:15.941 { 00:14:15.941 "name": "BaseBdev3", 00:14:15.941 "uuid": "f4751ea9-ed69-4697-9d44-9db09513d988", 00:14:15.941 "is_configured": true, 00:14:15.941 "data_offset": 2048, 00:14:15.941 "data_size": 63488 00:14:15.941 }, 00:14:15.941 { 00:14:15.941 "name": "BaseBdev4", 00:14:15.941 "uuid": "a334c920-47ab-47b3-9054-5ce924e3faaf", 00:14:15.941 "is_configured": true, 00:14:15.941 "data_offset": 2048, 00:14:15.941 "data_size": 63488 00:14:15.941 } 00:14:15.941 ] 00:14:15.941 }' 00:14:15.941 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.941 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.199 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:16.199 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:16.199 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:16.199 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:16.199 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:16.199 
15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:16.199 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:16.199 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:16.199 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.199 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.199 [2024-12-06 15:40:59.478753] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:16.457 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.457 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:16.457 "name": "Existed_Raid", 00:14:16.457 "aliases": [ 00:14:16.457 "de1d32b2-aae1-48c7-90a9-35fbd1d38128" 00:14:16.457 ], 00:14:16.457 "product_name": "Raid Volume", 00:14:16.457 "block_size": 512, 00:14:16.457 "num_blocks": 253952, 00:14:16.457 "uuid": "de1d32b2-aae1-48c7-90a9-35fbd1d38128", 00:14:16.457 "assigned_rate_limits": { 00:14:16.457 "rw_ios_per_sec": 0, 00:14:16.457 "rw_mbytes_per_sec": 0, 00:14:16.457 "r_mbytes_per_sec": 0, 00:14:16.457 "w_mbytes_per_sec": 0 00:14:16.457 }, 00:14:16.457 "claimed": false, 00:14:16.457 "zoned": false, 00:14:16.457 "supported_io_types": { 00:14:16.457 "read": true, 00:14:16.457 "write": true, 00:14:16.457 "unmap": true, 00:14:16.457 "flush": true, 00:14:16.457 "reset": true, 00:14:16.457 "nvme_admin": false, 00:14:16.457 "nvme_io": false, 00:14:16.457 "nvme_io_md": false, 00:14:16.457 "write_zeroes": true, 00:14:16.457 "zcopy": false, 00:14:16.457 "get_zone_info": false, 00:14:16.457 "zone_management": false, 00:14:16.457 "zone_append": false, 00:14:16.457 "compare": false, 00:14:16.457 "compare_and_write": false, 00:14:16.457 "abort": 
false, 00:14:16.457 "seek_hole": false, 00:14:16.457 "seek_data": false, 00:14:16.457 "copy": false, 00:14:16.457 "nvme_iov_md": false 00:14:16.457 }, 00:14:16.457 "memory_domains": [ 00:14:16.457 { 00:14:16.457 "dma_device_id": "system", 00:14:16.457 "dma_device_type": 1 00:14:16.457 }, 00:14:16.457 { 00:14:16.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.457 "dma_device_type": 2 00:14:16.457 }, 00:14:16.457 { 00:14:16.457 "dma_device_id": "system", 00:14:16.457 "dma_device_type": 1 00:14:16.457 }, 00:14:16.457 { 00:14:16.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.457 "dma_device_type": 2 00:14:16.457 }, 00:14:16.457 { 00:14:16.457 "dma_device_id": "system", 00:14:16.457 "dma_device_type": 1 00:14:16.457 }, 00:14:16.457 { 00:14:16.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.457 "dma_device_type": 2 00:14:16.457 }, 00:14:16.457 { 00:14:16.457 "dma_device_id": "system", 00:14:16.457 "dma_device_type": 1 00:14:16.457 }, 00:14:16.457 { 00:14:16.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.457 "dma_device_type": 2 00:14:16.457 } 00:14:16.457 ], 00:14:16.457 "driver_specific": { 00:14:16.457 "raid": { 00:14:16.457 "uuid": "de1d32b2-aae1-48c7-90a9-35fbd1d38128", 00:14:16.457 "strip_size_kb": 64, 00:14:16.457 "state": "online", 00:14:16.457 "raid_level": "concat", 00:14:16.457 "superblock": true, 00:14:16.457 "num_base_bdevs": 4, 00:14:16.457 "num_base_bdevs_discovered": 4, 00:14:16.457 "num_base_bdevs_operational": 4, 00:14:16.457 "base_bdevs_list": [ 00:14:16.457 { 00:14:16.457 "name": "NewBaseBdev", 00:14:16.457 "uuid": "bc692068-ee30-4903-8d6e-f10abce1f05c", 00:14:16.457 "is_configured": true, 00:14:16.457 "data_offset": 2048, 00:14:16.457 "data_size": 63488 00:14:16.457 }, 00:14:16.457 { 00:14:16.457 "name": "BaseBdev2", 00:14:16.457 "uuid": "a71e1350-74c0-43e0-ae1a-4f84140d3485", 00:14:16.457 "is_configured": true, 00:14:16.457 "data_offset": 2048, 00:14:16.457 "data_size": 63488 00:14:16.457 }, 00:14:16.457 { 00:14:16.457 
"name": "BaseBdev3", 00:14:16.457 "uuid": "f4751ea9-ed69-4697-9d44-9db09513d988", 00:14:16.457 "is_configured": true, 00:14:16.457 "data_offset": 2048, 00:14:16.457 "data_size": 63488 00:14:16.457 }, 00:14:16.457 { 00:14:16.457 "name": "BaseBdev4", 00:14:16.457 "uuid": "a334c920-47ab-47b3-9054-5ce924e3faaf", 00:14:16.457 "is_configured": true, 00:14:16.457 "data_offset": 2048, 00:14:16.457 "data_size": 63488 00:14:16.457 } 00:14:16.457 ] 00:14:16.457 } 00:14:16.457 } 00:14:16.457 }' 00:14:16.457 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:16.457 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:16.457 BaseBdev2 00:14:16.457 BaseBdev3 00:14:16.457 BaseBdev4' 00:14:16.457 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:16.457 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:16.457 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:16.457 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:16.457 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:16.457 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.457 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.457 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.457 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:16.457 15:40:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:16.458 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:16.458 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:16.458 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:16.458 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.458 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.458 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.458 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:16.458 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:16.458 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:16.458 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:16.458 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.458 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:16.458 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.458 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.716 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:16.716 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:14:16.716 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:16.716 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:16.716 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:16.716 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.716 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.716 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.716 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:16.716 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:16.716 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:16.716 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.716 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.716 [2024-12-06 15:40:59.810360] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:16.716 [2024-12-06 15:40:59.810405] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:16.716 [2024-12-06 15:40:59.810526] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:16.716 [2024-12-06 15:40:59.810619] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:16.716 [2024-12-06 15:40:59.810633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:14:16.716 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.716 15:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71978 00:14:16.716 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 71978 ']' 00:14:16.716 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 71978 00:14:16.716 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:16.716 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:16.716 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71978 00:14:16.716 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:16.716 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:16.716 killing process with pid 71978 00:14:16.716 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71978' 00:14:16.716 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 71978 00:14:16.716 [2024-12-06 15:40:59.865080] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:16.716 15:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 71978 00:14:17.281 [2024-12-06 15:41:00.325511] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:18.656 15:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:18.656 00:14:18.656 real 0m11.747s 00:14:18.656 user 0m18.242s 00:14:18.656 sys 0m2.547s 00:14:18.656 15:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:18.656 15:41:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.656 ************************************ 00:14:18.656 END TEST raid_state_function_test_sb 00:14:18.656 ************************************ 00:14:18.656 15:41:01 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:14:18.656 15:41:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:18.656 15:41:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:18.656 15:41:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:18.656 ************************************ 00:14:18.656 START TEST raid_superblock_test 00:14:18.656 ************************************ 00:14:18.656 15:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:14:18.656 15:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:14:18.656 15:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:14:18.656 15:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:18.656 15:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:18.656 15:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:18.656 15:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:18.656 15:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:18.656 15:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:18.656 15:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:18.656 15:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:18.656 15:41:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:18.656 15:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:18.656 15:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:18.656 15:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:14:18.656 15:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:18.656 15:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:18.656 15:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72649 00:14:18.656 15:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:18.656 15:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72649 00:14:18.656 15:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72649 ']' 00:14:18.656 15:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.656 15:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:18.656 15:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.656 15:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:18.656 15:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.656 [2024-12-06 15:41:01.779092] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:14:18.656 [2024-12-06 15:41:01.779250] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72649 ] 00:14:18.914 [2024-12-06 15:41:01.967710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.914 [2024-12-06 15:41:02.116023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.173 [2024-12-06 15:41:02.368258] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:19.173 [2024-12-06 15:41:02.368325] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:19.433 15:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:19.433 15:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:19.433 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:19.433 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:19.433 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:19.433 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:19.433 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:19.433 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:19.433 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:19.433 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:19.433 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:19.433 
15:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.433 15:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.433 malloc1 00:14:19.433 15:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.433 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:19.433 15:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.433 15:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.433 [2024-12-06 15:41:02.702114] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:19.433 [2024-12-06 15:41:02.702219] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.433 [2024-12-06 15:41:02.702248] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:19.433 [2024-12-06 15:41:02.702262] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.433 [2024-12-06 15:41:02.705188] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.433 [2024-12-06 15:41:02.705236] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:19.433 pt1 00:14:19.433 15:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.433 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:19.433 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:19.433 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:19.433 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:19.433 15:41:02 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:19.433 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:19.433 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:19.433 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:19.433 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:19.433 15:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.433 15:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.693 malloc2 00:14:19.693 15:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.693 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:19.693 15:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.693 15:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.693 [2024-12-06 15:41:02.767487] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:19.693 [2024-12-06 15:41:02.767574] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.693 [2024-12-06 15:41:02.767611] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:19.693 [2024-12-06 15:41:02.767626] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.693 [2024-12-06 15:41:02.770428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.693 [2024-12-06 15:41:02.770473] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:19.693 
pt2 00:14:19.693 15:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.693 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:19.693 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:19.693 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:19.693 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:19.693 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:19.693 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:19.693 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:19.693 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:19.693 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:19.693 15:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.693 15:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.693 malloc3 00:14:19.693 15:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.693 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:19.693 15:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.693 15:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.693 [2024-12-06 15:41:02.846770] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:19.694 [2024-12-06 15:41:02.846843] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.694 [2024-12-06 15:41:02.846871] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:19.694 [2024-12-06 15:41:02.846884] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.694 [2024-12-06 15:41:02.849581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.694 [2024-12-06 15:41:02.849622] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:19.694 pt3 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.694 malloc4 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.694 [2024-12-06 15:41:02.911396] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:19.694 [2024-12-06 15:41:02.911482] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.694 [2024-12-06 15:41:02.911522] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:19.694 [2024-12-06 15:41:02.911536] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.694 [2024-12-06 15:41:02.914288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.694 [2024-12-06 15:41:02.914333] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:19.694 pt4 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.694 [2024-12-06 15:41:02.923415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:19.694 [2024-12-06 
15:41:02.925916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:19.694 [2024-12-06 15:41:02.926022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:19.694 [2024-12-06 15:41:02.926071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:19.694 [2024-12-06 15:41:02.926309] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:19.694 [2024-12-06 15:41:02.926326] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:19.694 [2024-12-06 15:41:02.926679] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:19.694 [2024-12-06 15:41:02.926889] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:19.694 [2024-12-06 15:41:02.926909] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:19.694 [2024-12-06 15:41:02.927106] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.694 "name": "raid_bdev1", 00:14:19.694 "uuid": "b53987c8-0209-4967-9d7c-c11b4f46939f", 00:14:19.694 "strip_size_kb": 64, 00:14:19.694 "state": "online", 00:14:19.694 "raid_level": "concat", 00:14:19.694 "superblock": true, 00:14:19.694 "num_base_bdevs": 4, 00:14:19.694 "num_base_bdevs_discovered": 4, 00:14:19.694 "num_base_bdevs_operational": 4, 00:14:19.694 "base_bdevs_list": [ 00:14:19.694 { 00:14:19.694 "name": "pt1", 00:14:19.694 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:19.694 "is_configured": true, 00:14:19.694 "data_offset": 2048, 00:14:19.694 "data_size": 63488 00:14:19.694 }, 00:14:19.694 { 00:14:19.694 "name": "pt2", 00:14:19.694 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:19.694 "is_configured": true, 00:14:19.694 "data_offset": 2048, 00:14:19.694 "data_size": 63488 00:14:19.694 }, 00:14:19.694 { 00:14:19.694 "name": "pt3", 00:14:19.694 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:19.694 "is_configured": true, 00:14:19.694 "data_offset": 2048, 00:14:19.694 
"data_size": 63488 00:14:19.694 }, 00:14:19.694 { 00:14:19.694 "name": "pt4", 00:14:19.694 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:19.694 "is_configured": true, 00:14:19.694 "data_offset": 2048, 00:14:19.694 "data_size": 63488 00:14:19.694 } 00:14:19.694 ] 00:14:19.694 }' 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.694 15:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.264 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:20.264 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:20.264 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:20.264 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:20.264 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:20.264 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:20.264 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:20.264 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:20.264 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.264 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.264 [2024-12-06 15:41:03.371147] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:20.264 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.264 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:20.264 "name": "raid_bdev1", 00:14:20.264 "aliases": [ 00:14:20.264 "b53987c8-0209-4967-9d7c-c11b4f46939f" 
00:14:20.264 ], 00:14:20.264 "product_name": "Raid Volume", 00:14:20.264 "block_size": 512, 00:14:20.264 "num_blocks": 253952, 00:14:20.264 "uuid": "b53987c8-0209-4967-9d7c-c11b4f46939f", 00:14:20.264 "assigned_rate_limits": { 00:14:20.264 "rw_ios_per_sec": 0, 00:14:20.264 "rw_mbytes_per_sec": 0, 00:14:20.264 "r_mbytes_per_sec": 0, 00:14:20.264 "w_mbytes_per_sec": 0 00:14:20.264 }, 00:14:20.264 "claimed": false, 00:14:20.264 "zoned": false, 00:14:20.264 "supported_io_types": { 00:14:20.264 "read": true, 00:14:20.264 "write": true, 00:14:20.264 "unmap": true, 00:14:20.264 "flush": true, 00:14:20.264 "reset": true, 00:14:20.264 "nvme_admin": false, 00:14:20.264 "nvme_io": false, 00:14:20.264 "nvme_io_md": false, 00:14:20.264 "write_zeroes": true, 00:14:20.264 "zcopy": false, 00:14:20.264 "get_zone_info": false, 00:14:20.264 "zone_management": false, 00:14:20.264 "zone_append": false, 00:14:20.264 "compare": false, 00:14:20.264 "compare_and_write": false, 00:14:20.264 "abort": false, 00:14:20.264 "seek_hole": false, 00:14:20.264 "seek_data": false, 00:14:20.264 "copy": false, 00:14:20.264 "nvme_iov_md": false 00:14:20.264 }, 00:14:20.264 "memory_domains": [ 00:14:20.264 { 00:14:20.264 "dma_device_id": "system", 00:14:20.264 "dma_device_type": 1 00:14:20.264 }, 00:14:20.264 { 00:14:20.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.264 "dma_device_type": 2 00:14:20.264 }, 00:14:20.264 { 00:14:20.264 "dma_device_id": "system", 00:14:20.264 "dma_device_type": 1 00:14:20.264 }, 00:14:20.264 { 00:14:20.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.264 "dma_device_type": 2 00:14:20.264 }, 00:14:20.264 { 00:14:20.264 "dma_device_id": "system", 00:14:20.264 "dma_device_type": 1 00:14:20.264 }, 00:14:20.264 { 00:14:20.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.264 "dma_device_type": 2 00:14:20.264 }, 00:14:20.264 { 00:14:20.264 "dma_device_id": "system", 00:14:20.264 "dma_device_type": 1 00:14:20.264 }, 00:14:20.264 { 00:14:20.264 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:20.264 "dma_device_type": 2 00:14:20.264 } 00:14:20.264 ], 00:14:20.264 "driver_specific": { 00:14:20.264 "raid": { 00:14:20.264 "uuid": "b53987c8-0209-4967-9d7c-c11b4f46939f", 00:14:20.264 "strip_size_kb": 64, 00:14:20.264 "state": "online", 00:14:20.264 "raid_level": "concat", 00:14:20.264 "superblock": true, 00:14:20.264 "num_base_bdevs": 4, 00:14:20.264 "num_base_bdevs_discovered": 4, 00:14:20.265 "num_base_bdevs_operational": 4, 00:14:20.265 "base_bdevs_list": [ 00:14:20.265 { 00:14:20.265 "name": "pt1", 00:14:20.265 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:20.265 "is_configured": true, 00:14:20.265 "data_offset": 2048, 00:14:20.265 "data_size": 63488 00:14:20.265 }, 00:14:20.265 { 00:14:20.265 "name": "pt2", 00:14:20.265 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:20.265 "is_configured": true, 00:14:20.265 "data_offset": 2048, 00:14:20.265 "data_size": 63488 00:14:20.265 }, 00:14:20.265 { 00:14:20.265 "name": "pt3", 00:14:20.265 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:20.265 "is_configured": true, 00:14:20.265 "data_offset": 2048, 00:14:20.265 "data_size": 63488 00:14:20.265 }, 00:14:20.265 { 00:14:20.265 "name": "pt4", 00:14:20.265 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:20.265 "is_configured": true, 00:14:20.265 "data_offset": 2048, 00:14:20.265 "data_size": 63488 00:14:20.265 } 00:14:20.265 ] 00:14:20.265 } 00:14:20.265 } 00:14:20.265 }' 00:14:20.265 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:20.265 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:20.265 pt2 00:14:20.265 pt3 00:14:20.265 pt4' 00:14:20.265 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:20.265 15:41:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:20.265 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:20.265 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:20.265 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.265 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.265 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:20.265 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:20.525 15:41:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:20.525 [2024-12-06 15:41:03.706765] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b53987c8-0209-4967-9d7c-c11b4f46939f 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b53987c8-0209-4967-9d7c-c11b4f46939f ']' 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.525 [2024-12-06 15:41:03.750385] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:20.525 [2024-12-06 15:41:03.750425] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:20.525 [2024-12-06 15:41:03.750549] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:20.525 [2024-12-06 15:41:03.750638] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:20.525 [2024-12-06 15:41:03.750659] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:20.525 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:20.526 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:20.526 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.526 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.526 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.526 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:20.526 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:20.526 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.526 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.786 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.786 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:20.786 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:20.786 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.786 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.786 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:20.786 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:20.786 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:14:20.786 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.786 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.786 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.786 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:20.786 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.786 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.786 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:20.786 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.786 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:20.786 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:20.786 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:20.786 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:20.786 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:20.786 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:20.786 15:41:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:20.786 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:20.786 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:20.786 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.786 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.786 [2024-12-06 15:41:03.918350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:20.786 [2024-12-06 15:41:03.920818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:20.786 [2024-12-06 15:41:03.920880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:20.786 [2024-12-06 15:41:03.920918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:14:20.786 [2024-12-06 15:41:03.920975] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:20.786 [2024-12-06 15:41:03.921036] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:20.786 [2024-12-06 15:41:03.921061] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:20.786 [2024-12-06 15:41:03.921084] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:14:20.786 [2024-12-06 15:41:03.921100] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:20.786 [2024-12-06 15:41:03.921114] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:14:20.786 request: 00:14:20.786 { 00:14:20.786 "name": "raid_bdev1", 00:14:20.786 "raid_level": "concat", 00:14:20.786 "base_bdevs": [ 00:14:20.786 "malloc1", 00:14:20.786 "malloc2", 00:14:20.786 "malloc3", 00:14:20.786 "malloc4" 00:14:20.786 ], 00:14:20.786 "strip_size_kb": 64, 00:14:20.786 "superblock": false, 00:14:20.786 "method": "bdev_raid_create", 00:14:20.786 "req_id": 1 00:14:20.787 } 00:14:20.787 Got JSON-RPC error response 00:14:20.787 response: 00:14:20.787 { 00:14:20.787 "code": -17, 00:14:20.787 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:20.787 } 00:14:20.787 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:20.787 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:20.787 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:20.787 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:20.787 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:20.787 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.787 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.787 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.787 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:20.787 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.787 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:20.787 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:20.787 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:14:20.787 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.787 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.787 [2024-12-06 15:41:03.982306] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:20.787 [2024-12-06 15:41:03.982384] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.787 [2024-12-06 15:41:03.982412] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:20.787 [2024-12-06 15:41:03.982428] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.787 [2024-12-06 15:41:03.985590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.787 [2024-12-06 15:41:03.985648] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:20.787 [2024-12-06 15:41:03.985774] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:20.787 [2024-12-06 15:41:03.985867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:20.787 pt1 00:14:20.787 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.787 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:14:20.787 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:20.787 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:20.787 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:20.787 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.787 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:14:20.787 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.787 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.787 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.787 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.787 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.787 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.787 15:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.787 15:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.787 15:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.787 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.787 "name": "raid_bdev1", 00:14:20.787 "uuid": "b53987c8-0209-4967-9d7c-c11b4f46939f", 00:14:20.787 "strip_size_kb": 64, 00:14:20.787 "state": "configuring", 00:14:20.787 "raid_level": "concat", 00:14:20.787 "superblock": true, 00:14:20.787 "num_base_bdevs": 4, 00:14:20.787 "num_base_bdevs_discovered": 1, 00:14:20.787 "num_base_bdevs_operational": 4, 00:14:20.787 "base_bdevs_list": [ 00:14:20.787 { 00:14:20.787 "name": "pt1", 00:14:20.787 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:20.787 "is_configured": true, 00:14:20.787 "data_offset": 2048, 00:14:20.787 "data_size": 63488 00:14:20.787 }, 00:14:20.787 { 00:14:20.787 "name": null, 00:14:20.787 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:20.787 "is_configured": false, 00:14:20.787 "data_offset": 2048, 00:14:20.787 "data_size": 63488 00:14:20.787 }, 00:14:20.787 { 00:14:20.787 "name": null, 00:14:20.787 
"uuid": "00000000-0000-0000-0000-000000000003", 00:14:20.787 "is_configured": false, 00:14:20.787 "data_offset": 2048, 00:14:20.787 "data_size": 63488 00:14:20.787 }, 00:14:20.787 { 00:14:20.787 "name": null, 00:14:20.787 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:20.787 "is_configured": false, 00:14:20.787 "data_offset": 2048, 00:14:20.787 "data_size": 63488 00:14:20.787 } 00:14:20.787 ] 00:14:20.787 }' 00:14:20.787 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.787 15:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.356 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:14:21.356 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:21.356 15:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.356 15:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.356 [2024-12-06 15:41:04.458354] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:21.356 [2024-12-06 15:41:04.458458] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:21.356 [2024-12-06 15:41:04.458487] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:21.356 [2024-12-06 15:41:04.458515] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:21.356 [2024-12-06 15:41:04.459081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:21.356 [2024-12-06 15:41:04.459118] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:21.356 [2024-12-06 15:41:04.459226] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:21.356 [2024-12-06 15:41:04.459259] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:21.356 pt2 00:14:21.356 15:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.356 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:21.356 15:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.356 15:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.356 [2024-12-06 15:41:04.466342] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:21.356 15:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.356 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:14:21.356 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:21.356 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:21.356 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:21.356 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.356 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:21.356 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.356 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.356 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.356 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.356 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.356 15:41:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.356 15:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.356 15:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.356 15:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.356 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.356 "name": "raid_bdev1", 00:14:21.356 "uuid": "b53987c8-0209-4967-9d7c-c11b4f46939f", 00:14:21.356 "strip_size_kb": 64, 00:14:21.356 "state": "configuring", 00:14:21.356 "raid_level": "concat", 00:14:21.356 "superblock": true, 00:14:21.356 "num_base_bdevs": 4, 00:14:21.356 "num_base_bdevs_discovered": 1, 00:14:21.356 "num_base_bdevs_operational": 4, 00:14:21.356 "base_bdevs_list": [ 00:14:21.356 { 00:14:21.356 "name": "pt1", 00:14:21.356 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:21.356 "is_configured": true, 00:14:21.356 "data_offset": 2048, 00:14:21.356 "data_size": 63488 00:14:21.356 }, 00:14:21.356 { 00:14:21.356 "name": null, 00:14:21.356 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:21.356 "is_configured": false, 00:14:21.356 "data_offset": 0, 00:14:21.356 "data_size": 63488 00:14:21.356 }, 00:14:21.356 { 00:14:21.356 "name": null, 00:14:21.356 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:21.356 "is_configured": false, 00:14:21.356 "data_offset": 2048, 00:14:21.356 "data_size": 63488 00:14:21.356 }, 00:14:21.356 { 00:14:21.356 "name": null, 00:14:21.356 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:21.356 "is_configured": false, 00:14:21.356 "data_offset": 2048, 00:14:21.356 "data_size": 63488 00:14:21.356 } 00:14:21.356 ] 00:14:21.356 }' 00:14:21.356 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.356 15:41:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:21.616 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:21.616 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:21.616 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:21.616 15:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.616 15:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.877 [2024-12-06 15:41:04.910358] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:21.877 [2024-12-06 15:41:04.910456] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:21.877 [2024-12-06 15:41:04.910483] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:21.878 [2024-12-06 15:41:04.910496] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:21.878 [2024-12-06 15:41:04.911089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:21.878 [2024-12-06 15:41:04.911127] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:21.878 [2024-12-06 15:41:04.911242] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:21.878 [2024-12-06 15:41:04.911271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:21.878 pt2 00:14:21.878 15:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.878 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:21.878 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:21.878 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:21.878 15:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.878 15:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.878 [2024-12-06 15:41:04.922355] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:21.878 [2024-12-06 15:41:04.922444] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:21.878 [2024-12-06 15:41:04.922482] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:21.878 [2024-12-06 15:41:04.922518] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:21.878 [2024-12-06 15:41:04.923126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:21.878 [2024-12-06 15:41:04.923161] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:21.878 [2024-12-06 15:41:04.923279] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:21.878 [2024-12-06 15:41:04.923330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:21.878 pt3 00:14:21.878 15:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.878 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:21.878 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:21.878 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:21.878 15:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.878 15:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.878 [2024-12-06 15:41:04.934296] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:21.878 [2024-12-06 15:41:04.934362] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:21.878 [2024-12-06 15:41:04.934388] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:21.878 [2024-12-06 15:41:04.934401] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:21.878 [2024-12-06 15:41:04.934922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:21.878 [2024-12-06 15:41:04.934956] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:21.878 [2024-12-06 15:41:04.935048] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:21.878 [2024-12-06 15:41:04.935081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:21.878 [2024-12-06 15:41:04.935255] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:21.878 [2024-12-06 15:41:04.935272] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:21.878 [2024-12-06 15:41:04.935608] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:21.878 [2024-12-06 15:41:04.935792] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:21.878 [2024-12-06 15:41:04.935811] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:21.878 [2024-12-06 15:41:04.935974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.878 pt4 00:14:21.878 15:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.878 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:21.878 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:14:21.878 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:21.878 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:21.878 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:21.878 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:21.878 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.878 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:21.878 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.878 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.878 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.878 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.878 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.878 15:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.878 15:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.878 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.878 15:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.878 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.878 "name": "raid_bdev1", 00:14:21.878 "uuid": "b53987c8-0209-4967-9d7c-c11b4f46939f", 00:14:21.878 "strip_size_kb": 64, 00:14:21.878 "state": "online", 00:14:21.878 "raid_level": "concat", 00:14:21.878 
"superblock": true, 00:14:21.878 "num_base_bdevs": 4, 00:14:21.878 "num_base_bdevs_discovered": 4, 00:14:21.878 "num_base_bdevs_operational": 4, 00:14:21.878 "base_bdevs_list": [ 00:14:21.878 { 00:14:21.878 "name": "pt1", 00:14:21.878 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:21.878 "is_configured": true, 00:14:21.878 "data_offset": 2048, 00:14:21.878 "data_size": 63488 00:14:21.878 }, 00:14:21.878 { 00:14:21.878 "name": "pt2", 00:14:21.878 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:21.878 "is_configured": true, 00:14:21.878 "data_offset": 2048, 00:14:21.878 "data_size": 63488 00:14:21.878 }, 00:14:21.878 { 00:14:21.878 "name": "pt3", 00:14:21.878 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:21.878 "is_configured": true, 00:14:21.878 "data_offset": 2048, 00:14:21.878 "data_size": 63488 00:14:21.878 }, 00:14:21.878 { 00:14:21.878 "name": "pt4", 00:14:21.878 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:21.878 "is_configured": true, 00:14:21.878 "data_offset": 2048, 00:14:21.878 "data_size": 63488 00:14:21.878 } 00:14:21.878 ] 00:14:21.878 }' 00:14:21.878 15:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.878 15:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.138 15:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:22.138 15:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:22.138 15:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:22.138 15:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:22.138 15:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:22.138 15:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:22.138 15:41:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:22.138 15:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.138 15:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.138 15:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:22.138 [2024-12-06 15:41:05.378723] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:22.138 15:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.138 15:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:22.138 "name": "raid_bdev1", 00:14:22.138 "aliases": [ 00:14:22.138 "b53987c8-0209-4967-9d7c-c11b4f46939f" 00:14:22.138 ], 00:14:22.138 "product_name": "Raid Volume", 00:14:22.138 "block_size": 512, 00:14:22.138 "num_blocks": 253952, 00:14:22.138 "uuid": "b53987c8-0209-4967-9d7c-c11b4f46939f", 00:14:22.138 "assigned_rate_limits": { 00:14:22.138 "rw_ios_per_sec": 0, 00:14:22.138 "rw_mbytes_per_sec": 0, 00:14:22.138 "r_mbytes_per_sec": 0, 00:14:22.138 "w_mbytes_per_sec": 0 00:14:22.138 }, 00:14:22.138 "claimed": false, 00:14:22.138 "zoned": false, 00:14:22.138 "supported_io_types": { 00:14:22.138 "read": true, 00:14:22.138 "write": true, 00:14:22.138 "unmap": true, 00:14:22.138 "flush": true, 00:14:22.138 "reset": true, 00:14:22.138 "nvme_admin": false, 00:14:22.138 "nvme_io": false, 00:14:22.138 "nvme_io_md": false, 00:14:22.138 "write_zeroes": true, 00:14:22.138 "zcopy": false, 00:14:22.138 "get_zone_info": false, 00:14:22.138 "zone_management": false, 00:14:22.138 "zone_append": false, 00:14:22.138 "compare": false, 00:14:22.138 "compare_and_write": false, 00:14:22.138 "abort": false, 00:14:22.138 "seek_hole": false, 00:14:22.138 "seek_data": false, 00:14:22.138 "copy": false, 00:14:22.138 "nvme_iov_md": false 00:14:22.138 }, 00:14:22.138 
"memory_domains": [ 00:14:22.138 { 00:14:22.138 "dma_device_id": "system", 00:14:22.138 "dma_device_type": 1 00:14:22.138 }, 00:14:22.138 { 00:14:22.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.138 "dma_device_type": 2 00:14:22.138 }, 00:14:22.138 { 00:14:22.138 "dma_device_id": "system", 00:14:22.138 "dma_device_type": 1 00:14:22.138 }, 00:14:22.138 { 00:14:22.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.138 "dma_device_type": 2 00:14:22.138 }, 00:14:22.138 { 00:14:22.138 "dma_device_id": "system", 00:14:22.138 "dma_device_type": 1 00:14:22.138 }, 00:14:22.138 { 00:14:22.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.138 "dma_device_type": 2 00:14:22.138 }, 00:14:22.138 { 00:14:22.138 "dma_device_id": "system", 00:14:22.138 "dma_device_type": 1 00:14:22.138 }, 00:14:22.138 { 00:14:22.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.138 "dma_device_type": 2 00:14:22.138 } 00:14:22.138 ], 00:14:22.138 "driver_specific": { 00:14:22.138 "raid": { 00:14:22.138 "uuid": "b53987c8-0209-4967-9d7c-c11b4f46939f", 00:14:22.138 "strip_size_kb": 64, 00:14:22.138 "state": "online", 00:14:22.138 "raid_level": "concat", 00:14:22.138 "superblock": true, 00:14:22.138 "num_base_bdevs": 4, 00:14:22.138 "num_base_bdevs_discovered": 4, 00:14:22.138 "num_base_bdevs_operational": 4, 00:14:22.138 "base_bdevs_list": [ 00:14:22.138 { 00:14:22.138 "name": "pt1", 00:14:22.138 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:22.138 "is_configured": true, 00:14:22.138 "data_offset": 2048, 00:14:22.138 "data_size": 63488 00:14:22.138 }, 00:14:22.138 { 00:14:22.138 "name": "pt2", 00:14:22.138 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:22.138 "is_configured": true, 00:14:22.138 "data_offset": 2048, 00:14:22.138 "data_size": 63488 00:14:22.138 }, 00:14:22.138 { 00:14:22.138 "name": "pt3", 00:14:22.138 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:22.138 "is_configured": true, 00:14:22.138 "data_offset": 2048, 00:14:22.138 "data_size": 63488 
00:14:22.138 }, 00:14:22.138 { 00:14:22.138 "name": "pt4", 00:14:22.138 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:22.138 "is_configured": true, 00:14:22.138 "data_offset": 2048, 00:14:22.138 "data_size": 63488 00:14:22.138 } 00:14:22.138 ] 00:14:22.138 } 00:14:22.138 } 00:14:22.138 }' 00:14:22.138 15:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:22.397 15:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:22.397 pt2 00:14:22.397 pt3 00:14:22.397 pt4' 00:14:22.397 15:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:22.397 15:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:22.397 15:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:22.397 15:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:22.397 15:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.397 15:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:22.397 15:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.397 15:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.397 15:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:22.397 15:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:22.397 15:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:22.397 15:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:14:22.397 15:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.397 15:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.397 15:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:22.397 15:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.397 15:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:22.397 15:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:22.397 15:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:22.397 15:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:22.397 15:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:22.397 15:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.397 15:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.397 15:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.397 15:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:22.397 15:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:22.397 15:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:22.397 15:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:22.397 15:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 
00:14:22.397 15:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.397 15:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.397 15:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.397 15:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:22.397 15:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:22.397 15:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:22.397 15:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:22.397 15:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.397 15:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.656 [2024-12-06 15:41:05.694654] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:22.656 15:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.656 15:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b53987c8-0209-4967-9d7c-c11b4f46939f '!=' b53987c8-0209-4967-9d7c-c11b4f46939f ']' 00:14:22.656 15:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:14:22.656 15:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:22.656 15:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:22.656 15:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72649 00:14:22.656 15:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72649 ']' 00:14:22.656 15:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72649 00:14:22.656 15:41:05 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@959 -- # uname 00:14:22.656 15:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:22.656 15:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72649 00:14:22.656 15:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:22.656 15:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:22.656 killing process with pid 72649 00:14:22.656 15:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72649' 00:14:22.656 15:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72649 00:14:22.656 [2024-12-06 15:41:05.788725] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:22.656 15:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72649 00:14:22.656 [2024-12-06 15:41:05.788841] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:22.656 [2024-12-06 15:41:05.788936] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:22.656 [2024-12-06 15:41:05.788952] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:23.221 [2024-12-06 15:41:06.236928] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:24.600 15:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:24.600 00:14:24.600 real 0m5.940s 00:14:24.600 user 0m8.234s 00:14:24.600 sys 0m1.278s 00:14:24.600 15:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:24.600 15:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.600 ************************************ 00:14:24.600 END TEST raid_superblock_test 
00:14:24.600 ************************************ 00:14:24.600 15:41:07 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:14:24.600 15:41:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:24.600 15:41:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:24.600 15:41:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:24.600 ************************************ 00:14:24.600 START TEST raid_read_error_test 00:14:24.600 ************************************ 00:14:24.600 15:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:14:24.600 15:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:14:24.600 15:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:24.600 15:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:24.600 15:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:24.600 15:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:24.600 15:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:24.600 15:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:24.600 15:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:24.600 15:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:24.600 15:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:24.600 15:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:24.600 15:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:24.600 15:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:14:24.601 15:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:24.601 15:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:24.601 15:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:24.601 15:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:24.601 15:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:24.601 15:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:24.601 15:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:24.601 15:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:24.601 15:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:24.601 15:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:24.601 15:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:24.601 15:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:14:24.601 15:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:24.601 15:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:24.601 15:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:24.601 15:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.BYK3cwAHhL 00:14:24.601 15:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:24.601 15:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- 
# raid_pid=72920 00:14:24.601 15:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72920 00:14:24.601 15:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 72920 ']' 00:14:24.601 15:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:24.601 15:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:24.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:24.601 15:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:24.601 15:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:24.601 15:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.601 [2024-12-06 15:41:07.769865] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:14:24.601 [2024-12-06 15:41:07.769997] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72920 ] 00:14:24.860 [2024-12-06 15:41:07.938453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.860 [2024-12-06 15:41:08.081956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.119 [2024-12-06 15:41:08.357784] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:25.119 [2024-12-06 15:41:08.357860] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:25.685 15:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:25.685 15:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:25.685 15:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.686 BaseBdev1_malloc 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.686 true 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.686 [2024-12-06 15:41:08.742958] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:25.686 [2024-12-06 15:41:08.743036] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.686 [2024-12-06 15:41:08.743064] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:25.686 [2024-12-06 15:41:08.743080] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.686 [2024-12-06 15:41:08.745946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.686 [2024-12-06 15:41:08.745993] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:25.686 BaseBdev1 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.686 BaseBdev2_malloc 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.686 true 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.686 [2024-12-06 15:41:08.808792] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:25.686 [2024-12-06 15:41:08.808852] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.686 [2024-12-06 15:41:08.808873] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:25.686 [2024-12-06 15:41:08.808889] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.686 [2024-12-06 15:41:08.811742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.686 [2024-12-06 15:41:08.811794] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:25.686 BaseBdev2 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.686 BaseBdev3_malloc 00:14:25.686 15:41:08 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.686 true 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.686 [2024-12-06 15:41:08.901863] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:25.686 [2024-12-06 15:41:08.901919] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.686 [2024-12-06 15:41:08.901940] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:25.686 [2024-12-06 15:41:08.901956] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.686 [2024-12-06 15:41:08.904790] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.686 [2024-12-06 15:41:08.904835] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:25.686 BaseBdev3 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.686 BaseBdev4_malloc 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.686 true 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.686 15:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.686 [2024-12-06 15:41:08.977413] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:25.686 [2024-12-06 15:41:08.977470] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.686 [2024-12-06 15:41:08.977493] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:25.686 [2024-12-06 15:41:08.977524] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.945 [2024-12-06 15:41:08.980359] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.945 [2024-12-06 15:41:08.980407] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:25.945 BaseBdev4 00:14:25.945 15:41:08 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.945 15:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:25.945 15:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.945 15:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.945 [2024-12-06 15:41:08.989483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:25.945 [2024-12-06 15:41:08.991993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:25.945 [2024-12-06 15:41:08.992076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:25.945 [2024-12-06 15:41:08.992146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:25.945 [2024-12-06 15:41:08.992398] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:25.945 [2024-12-06 15:41:08.992417] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:25.945 [2024-12-06 15:41:08.992719] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:14:25.945 [2024-12-06 15:41:08.992908] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:25.945 [2024-12-06 15:41:08.992923] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:25.945 [2024-12-06 15:41:08.993102] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:25.945 15:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.945 15:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:25.945 15:41:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:25.945 15:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.945 15:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:25.945 15:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.945 15:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:25.945 15:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.945 15:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.945 15:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.945 15:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.945 15:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.945 15:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.946 15:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.946 15:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.946 15:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.946 15:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.946 "name": "raid_bdev1", 00:14:25.946 "uuid": "7852e54c-b583-4714-b359-d86737737734", 00:14:25.946 "strip_size_kb": 64, 00:14:25.946 "state": "online", 00:14:25.946 "raid_level": "concat", 00:14:25.946 "superblock": true, 00:14:25.946 "num_base_bdevs": 4, 00:14:25.946 "num_base_bdevs_discovered": 4, 00:14:25.946 "num_base_bdevs_operational": 4, 00:14:25.946 "base_bdevs_list": [ 
00:14:25.946 { 00:14:25.946 "name": "BaseBdev1", 00:14:25.946 "uuid": "44ab5537-9d4b-57d3-aa32-33e8cbcc9f72", 00:14:25.946 "is_configured": true, 00:14:25.946 "data_offset": 2048, 00:14:25.946 "data_size": 63488 00:14:25.946 }, 00:14:25.946 { 00:14:25.946 "name": "BaseBdev2", 00:14:25.946 "uuid": "5ebdba89-f38e-5775-bbd5-5f15ea48d281", 00:14:25.946 "is_configured": true, 00:14:25.946 "data_offset": 2048, 00:14:25.946 "data_size": 63488 00:14:25.946 }, 00:14:25.946 { 00:14:25.946 "name": "BaseBdev3", 00:14:25.946 "uuid": "de384463-6955-57b0-9d00-33b70db0db77", 00:14:25.946 "is_configured": true, 00:14:25.946 "data_offset": 2048, 00:14:25.946 "data_size": 63488 00:14:25.946 }, 00:14:25.946 { 00:14:25.946 "name": "BaseBdev4", 00:14:25.946 "uuid": "abacbec6-398f-5422-a419-0af41667804e", 00:14:25.946 "is_configured": true, 00:14:25.946 "data_offset": 2048, 00:14:25.946 "data_size": 63488 00:14:25.946 } 00:14:25.946 ] 00:14:25.946 }' 00:14:25.946 15:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.946 15:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.205 15:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:26.205 15:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:26.464 [2024-12-06 15:41:09.558184] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:14:27.403 15:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:27.403 15:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.403 15:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.403 15:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.403 15:41:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:27.403 15:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:14:27.403 15:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:14:27.403 15:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:27.403 15:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.403 15:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.403 15:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:27.403 15:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:27.403 15:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:27.403 15:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.403 15:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.403 15:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.403 15:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.403 15:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.403 15:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.403 15:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.403 15:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.403 15:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.404 15:41:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.404 "name": "raid_bdev1", 00:14:27.404 "uuid": "7852e54c-b583-4714-b359-d86737737734", 00:14:27.404 "strip_size_kb": 64, 00:14:27.404 "state": "online", 00:14:27.404 "raid_level": "concat", 00:14:27.404 "superblock": true, 00:14:27.404 "num_base_bdevs": 4, 00:14:27.404 "num_base_bdevs_discovered": 4, 00:14:27.404 "num_base_bdevs_operational": 4, 00:14:27.404 "base_bdevs_list": [ 00:14:27.404 { 00:14:27.404 "name": "BaseBdev1", 00:14:27.404 "uuid": "44ab5537-9d4b-57d3-aa32-33e8cbcc9f72", 00:14:27.404 "is_configured": true, 00:14:27.404 "data_offset": 2048, 00:14:27.404 "data_size": 63488 00:14:27.404 }, 00:14:27.404 { 00:14:27.404 "name": "BaseBdev2", 00:14:27.404 "uuid": "5ebdba89-f38e-5775-bbd5-5f15ea48d281", 00:14:27.404 "is_configured": true, 00:14:27.404 "data_offset": 2048, 00:14:27.404 "data_size": 63488 00:14:27.404 }, 00:14:27.404 { 00:14:27.404 "name": "BaseBdev3", 00:14:27.404 "uuid": "de384463-6955-57b0-9d00-33b70db0db77", 00:14:27.404 "is_configured": true, 00:14:27.404 "data_offset": 2048, 00:14:27.404 "data_size": 63488 00:14:27.404 }, 00:14:27.404 { 00:14:27.404 "name": "BaseBdev4", 00:14:27.404 "uuid": "abacbec6-398f-5422-a419-0af41667804e", 00:14:27.404 "is_configured": true, 00:14:27.404 "data_offset": 2048, 00:14:27.404 "data_size": 63488 00:14:27.404 } 00:14:27.404 ] 00:14:27.404 }' 00:14:27.404 15:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.404 15:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.663 15:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:27.664 15:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.664 15:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.664 [2024-12-06 15:41:10.935660] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:27.664 [2024-12-06 15:41:10.935703] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:27.664 [2024-12-06 15:41:10.938473] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:27.664 [2024-12-06 15:41:10.938571] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:27.664 [2024-12-06 15:41:10.938627] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:27.664 [2024-12-06 15:41:10.938647] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:27.664 { 00:14:27.664 "results": [ 00:14:27.664 { 00:14:27.664 "job": "raid_bdev1", 00:14:27.664 "core_mask": "0x1", 00:14:27.664 "workload": "randrw", 00:14:27.664 "percentage": 50, 00:14:27.664 "status": "finished", 00:14:27.664 "queue_depth": 1, 00:14:27.664 "io_size": 131072, 00:14:27.664 "runtime": 1.377227, 00:14:27.664 "iops": 13063.205992911844, 00:14:27.664 "mibps": 1632.9007491139805, 00:14:27.664 "io_failed": 1, 00:14:27.664 "io_timeout": 0, 00:14:27.664 "avg_latency_us": 107.24633420297464, 00:14:27.664 "min_latency_us": 27.142168674698794, 00:14:27.664 "max_latency_us": 1585.760642570281 00:14:27.664 } 00:14:27.664 ], 00:14:27.664 "core_count": 1 00:14:27.664 } 00:14:27.664 15:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.664 15:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72920 00:14:27.664 15:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72920 ']' 00:14:27.664 15:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72920 00:14:27.664 15:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:14:27.664 15:41:10 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:27.664 15:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72920 00:14:27.925 killing process with pid 72920 00:14:27.925 15:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:27.925 15:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:27.925 15:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72920' 00:14:27.925 15:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72920 00:14:27.925 [2024-12-06 15:41:10.988407] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:27.925 15:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72920 00:14:28.187 [2024-12-06 15:41:11.348469] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:29.577 15:41:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:29.577 15:41:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.BYK3cwAHhL 00:14:29.577 15:41:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:29.577 15:41:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:14:29.577 15:41:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:14:29.577 15:41:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:29.577 15:41:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:29.577 15:41:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:14:29.577 00:14:29.577 real 0m5.008s 00:14:29.577 user 0m5.832s 00:14:29.577 sys 0m0.746s 00:14:29.577 ************************************ 00:14:29.577 END TEST raid_read_error_test 
00:14:29.577 15:41:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:29.577 15:41:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.577 ************************************ 00:14:29.577 15:41:12 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:14:29.577 15:41:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:29.577 15:41:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:29.577 15:41:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:29.577 ************************************ 00:14:29.577 START TEST raid_write_error_test 00:14:29.577 ************************************ 00:14:29.577 15:41:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:14:29.577 15:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:14:29.577 15:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:29.577 15:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:29.577 15:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:29.577 15:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:29.577 15:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:29.577 15:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:29.577 15:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:29.577 15:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:29.577 15:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:29.577 15:41:12 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:29.577 15:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:29.578 15:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:29.578 15:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:29.578 15:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:29.578 15:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:29.578 15:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:29.578 15:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:29.578 15:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:29.578 15:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:29.578 15:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:29.578 15:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:29.578 15:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:29.578 15:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:29.578 15:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:14:29.578 15:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:29.578 15:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:29.578 15:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:29.578 15:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.WWIdnaNopu 00:14:29.578 15:41:12 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73066 00:14:29.578 15:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:29.578 15:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73066 00:14:29.578 15:41:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73066 ']' 00:14:29.578 15:41:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.578 15:41:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:29.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.578 15:41:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.578 15:41:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:29.578 15:41:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.837 [2024-12-06 15:41:12.884557] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:14:29.837 [2024-12-06 15:41:12.884689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73066 ] 00:14:29.837 [2024-12-06 15:41:13.072055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.097 [2024-12-06 15:41:13.218602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.356 [2024-12-06 15:41:13.472323] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:30.356 [2024-12-06 15:41:13.472394] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:30.615 15:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:30.615 15:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:30.615 15:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:30.615 15:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:30.615 15:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.615 15:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.615 BaseBdev1_malloc 00:14:30.615 15:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.615 15:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:30.615 15:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.615 15:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.615 true 00:14:30.615 15:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:30.615 15:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:30.615 15:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.615 15:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.615 [2024-12-06 15:41:13.849677] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:30.615 [2024-12-06 15:41:13.849752] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.615 [2024-12-06 15:41:13.849778] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:30.615 [2024-12-06 15:41:13.849794] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.615 [2024-12-06 15:41:13.852538] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.615 [2024-12-06 15:41:13.852584] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:30.615 BaseBdev1 00:14:30.615 15:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.615 15:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:30.615 15:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:30.615 15:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.615 15:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.615 BaseBdev2_malloc 00:14:30.615 15:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.615 15:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:30.615 15:41:13 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.615 15:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.875 true 00:14:30.875 15:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.875 15:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:30.875 15:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.875 15:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.875 [2024-12-06 15:41:13.921522] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:30.875 [2024-12-06 15:41:13.921585] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.875 [2024-12-06 15:41:13.921604] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:30.875 [2024-12-06 15:41:13.921619] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.875 [2024-12-06 15:41:13.924295] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.875 [2024-12-06 15:41:13.924342] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:30.875 BaseBdev2 00:14:30.875 15:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.875 15:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:30.875 15:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:30.875 15:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.875 15:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:14:30.875 BaseBdev3_malloc 00:14:30.875 15:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.875 15:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:30.875 15:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.875 15:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.875 true 00:14:30.875 15:41:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.875 15:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:30.875 15:41:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.875 15:41:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.875 [2024-12-06 15:41:14.010246] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:30.875 [2024-12-06 15:41:14.010307] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.875 [2024-12-06 15:41:14.010329] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:30.875 [2024-12-06 15:41:14.010344] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.875 [2024-12-06 15:41:14.013060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.875 [2024-12-06 15:41:14.013115] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:30.875 BaseBdev3 00:14:30.875 15:41:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.875 15:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:30.875 15:41:14 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:30.875 15:41:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.875 15:41:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.875 BaseBdev4_malloc 00:14:30.875 15:41:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.875 15:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:30.875 15:41:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.875 15:41:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.875 true 00:14:30.875 15:41:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.875 15:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:30.875 15:41:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.875 15:41:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.875 [2024-12-06 15:41:14.086175] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:30.875 [2024-12-06 15:41:14.086230] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.875 [2024-12-06 15:41:14.086252] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:30.875 [2024-12-06 15:41:14.086267] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.875 [2024-12-06 15:41:14.088935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.875 [2024-12-06 15:41:14.088982] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:30.875 BaseBdev4 
00:14:30.875 15:41:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.875 15:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:30.875 15:41:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.875 15:41:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.875 [2024-12-06 15:41:14.098242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:30.875 [2024-12-06 15:41:14.100737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:30.875 [2024-12-06 15:41:14.100841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:30.875 [2024-12-06 15:41:14.100910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:30.875 [2024-12-06 15:41:14.101155] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:30.875 [2024-12-06 15:41:14.101174] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:30.875 [2024-12-06 15:41:14.101449] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:14:30.875 [2024-12-06 15:41:14.101674] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:30.875 [2024-12-06 15:41:14.101695] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:30.875 [2024-12-06 15:41:14.101863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.875 15:41:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.875 15:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:14:30.875 15:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.875 15:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.875 15:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:30.875 15:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.875 15:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:30.875 15:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.875 15:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.875 15:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.875 15:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.875 15:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.875 15:41:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.875 15:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.875 15:41:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.875 15:41:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.875 15:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.875 "name": "raid_bdev1", 00:14:30.876 "uuid": "c40117cb-654b-44ba-bcd4-2ca6f0361ada", 00:14:30.876 "strip_size_kb": 64, 00:14:30.876 "state": "online", 00:14:30.876 "raid_level": "concat", 00:14:30.876 "superblock": true, 00:14:30.876 "num_base_bdevs": 4, 00:14:30.876 "num_base_bdevs_discovered": 4, 00:14:30.876 
"num_base_bdevs_operational": 4, 00:14:30.876 "base_bdevs_list": [ 00:14:30.876 { 00:14:30.876 "name": "BaseBdev1", 00:14:30.876 "uuid": "b5e3d604-56b6-5248-b4f9-7717dfb4430d", 00:14:30.876 "is_configured": true, 00:14:30.876 "data_offset": 2048, 00:14:30.876 "data_size": 63488 00:14:30.876 }, 00:14:30.876 { 00:14:30.876 "name": "BaseBdev2", 00:14:30.876 "uuid": "e56d3d35-65b8-5151-b81c-0ccda3d6a4fc", 00:14:30.876 "is_configured": true, 00:14:30.876 "data_offset": 2048, 00:14:30.876 "data_size": 63488 00:14:30.876 }, 00:14:30.876 { 00:14:30.876 "name": "BaseBdev3", 00:14:30.876 "uuid": "11405dea-fd41-5de3-ae82-6da6952830cc", 00:14:30.876 "is_configured": true, 00:14:30.876 "data_offset": 2048, 00:14:30.876 "data_size": 63488 00:14:30.876 }, 00:14:30.876 { 00:14:30.876 "name": "BaseBdev4", 00:14:30.876 "uuid": "b2ac27e6-6513-532a-880b-d9c219916066", 00:14:30.876 "is_configured": true, 00:14:30.876 "data_offset": 2048, 00:14:30.876 "data_size": 63488 00:14:30.876 } 00:14:30.876 ] 00:14:30.876 }' 00:14:30.876 15:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.876 15:41:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.443 15:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:31.443 15:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:31.443 [2024-12-06 15:41:14.647408] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:14:32.384 15:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:32.384 15:41:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.384 15:41:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.384 15:41:15 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.384 15:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:32.384 15:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:14:32.384 15:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:14:32.384 15:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:32.384 15:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.384 15:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.384 15:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:32.384 15:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.384 15:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:32.384 15:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.384 15:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.384 15:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.384 15:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.384 15:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.384 15:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.384 15:41:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.384 15:41:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.384 15:41:15 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.384 15:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.384 "name": "raid_bdev1", 00:14:32.384 "uuid": "c40117cb-654b-44ba-bcd4-2ca6f0361ada", 00:14:32.384 "strip_size_kb": 64, 00:14:32.384 "state": "online", 00:14:32.384 "raid_level": "concat", 00:14:32.384 "superblock": true, 00:14:32.384 "num_base_bdevs": 4, 00:14:32.384 "num_base_bdevs_discovered": 4, 00:14:32.384 "num_base_bdevs_operational": 4, 00:14:32.384 "base_bdevs_list": [ 00:14:32.384 { 00:14:32.384 "name": "BaseBdev1", 00:14:32.384 "uuid": "b5e3d604-56b6-5248-b4f9-7717dfb4430d", 00:14:32.384 "is_configured": true, 00:14:32.384 "data_offset": 2048, 00:14:32.384 "data_size": 63488 00:14:32.384 }, 00:14:32.384 { 00:14:32.384 "name": "BaseBdev2", 00:14:32.384 "uuid": "e56d3d35-65b8-5151-b81c-0ccda3d6a4fc", 00:14:32.384 "is_configured": true, 00:14:32.384 "data_offset": 2048, 00:14:32.384 "data_size": 63488 00:14:32.384 }, 00:14:32.384 { 00:14:32.384 "name": "BaseBdev3", 00:14:32.384 "uuid": "11405dea-fd41-5de3-ae82-6da6952830cc", 00:14:32.384 "is_configured": true, 00:14:32.384 "data_offset": 2048, 00:14:32.384 "data_size": 63488 00:14:32.384 }, 00:14:32.384 { 00:14:32.384 "name": "BaseBdev4", 00:14:32.384 "uuid": "b2ac27e6-6513-532a-880b-d9c219916066", 00:14:32.384 "is_configured": true, 00:14:32.384 "data_offset": 2048, 00:14:32.384 "data_size": 63488 00:14:32.384 } 00:14:32.384 ] 00:14:32.384 }' 00:14:32.384 15:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.384 15:41:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.951 15:41:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:32.951 15:41:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.951 15:41:16 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:32.951 [2024-12-06 15:41:16.021087] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:32.951 [2024-12-06 15:41:16.021134] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:32.951 [2024-12-06 15:41:16.023852] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:32.951 [2024-12-06 15:41:16.023930] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:32.951 [2024-12-06 15:41:16.023981] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:32.951 [2024-12-06 15:41:16.024010] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:32.951 { 00:14:32.951 "results": [ 00:14:32.951 { 00:14:32.951 "job": "raid_bdev1", 00:14:32.951 "core_mask": "0x1", 00:14:32.951 "workload": "randrw", 00:14:32.951 "percentage": 50, 00:14:32.951 "status": "finished", 00:14:32.951 "queue_depth": 1, 00:14:32.951 "io_size": 131072, 00:14:32.951 "runtime": 1.373412, 00:14:32.951 "iops": 13187.594108686979, 00:14:32.951 "mibps": 1648.4492635858724, 00:14:32.951 "io_failed": 1, 00:14:32.951 "io_timeout": 0, 00:14:32.951 "avg_latency_us": 106.33486157959283, 00:14:32.951 "min_latency_us": 27.553413654618474, 00:14:32.951 "max_latency_us": 1519.9614457831326 00:14:32.951 } 00:14:32.951 ], 00:14:32.951 "core_count": 1 00:14:32.951 } 00:14:32.951 15:41:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.951 15:41:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73066 00:14:32.951 15:41:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73066 ']' 00:14:32.951 15:41:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73066 00:14:32.951 15:41:16 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:14:32.951 15:41:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:32.951 15:41:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73066 00:14:32.951 killing process with pid 73066 00:14:32.951 15:41:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:32.951 15:41:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:32.951 15:41:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73066' 00:14:32.951 15:41:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73066 00:14:32.951 [2024-12-06 15:41:16.070832] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:32.951 15:41:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73066 00:14:33.210 [2024-12-06 15:41:16.427891] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:34.588 15:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.WWIdnaNopu 00:14:34.588 15:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:34.588 15:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:34.588 15:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:14:34.588 15:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:14:34.588 15:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:34.588 15:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:34.588 15:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:14:34.588 00:14:34.588 real 0m5.002s 00:14:34.588 user 0m5.760s 
00:14:34.588 sys 0m0.779s 00:14:34.588 15:41:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:34.588 15:41:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.588 ************************************ 00:14:34.588 END TEST raid_write_error_test 00:14:34.588 ************************************ 00:14:34.588 15:41:17 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:14:34.588 15:41:17 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:14:34.588 15:41:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:34.588 15:41:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:34.588 15:41:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:34.588 ************************************ 00:14:34.588 START TEST raid_state_function_test 00:14:34.588 ************************************ 00:14:34.588 15:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:14:34.588 15:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:34.588 15:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:34.588 15:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:34.588 15:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:34.588 15:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:34.588 15:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:34.588 15:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:34.588 15:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:34.588 
15:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:34.588 15:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:34.588 15:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:34.588 15:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:34.588 15:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:34.588 15:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:34.588 15:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:34.588 15:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:34.588 15:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:34.588 15:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:34.588 15:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:34.588 15:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:34.588 15:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:34.588 15:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:34.588 15:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:34.588 15:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:34.588 15:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:34.588 15:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:34.588 15:41:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:34.588 15:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:34.588 15:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:34.588 15:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73215 00:14:34.588 15:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73215' 00:14:34.588 Process raid pid: 73215 00:14:34.588 15:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73215 00:14:34.588 15:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73215 ']' 00:14:34.588 15:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.588 15:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:34.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.588 15:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.588 15:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:34.588 15:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.847 [2024-12-06 15:41:17.972002] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:14:34.847 [2024-12-06 15:41:17.972201] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.131 [2024-12-06 15:41:18.177656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.131 [2024-12-06 15:41:18.335602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.389 [2024-12-06 15:41:18.588454] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:35.389 [2024-12-06 15:41:18.588517] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:35.654 15:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:35.654 15:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:35.654 15:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:35.654 15:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.654 15:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.654 [2024-12-06 15:41:18.884874] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:35.654 [2024-12-06 15:41:18.884967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:35.654 [2024-12-06 15:41:18.884996] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:35.654 [2024-12-06 15:41:18.885018] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:35.654 [2024-12-06 15:41:18.885030] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:35.654 [2024-12-06 15:41:18.885050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:35.654 [2024-12-06 15:41:18.885062] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:35.654 [2024-12-06 15:41:18.885081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:35.654 15:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.654 15:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:35.654 15:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:35.654 15:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:35.654 15:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:35.654 15:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:35.654 15:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:35.654 15:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.654 15:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.654 15:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.654 15:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.654 15:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.654 15:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.654 15:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:35.654 15:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.654 15:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.654 15:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.654 "name": "Existed_Raid", 00:14:35.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.654 "strip_size_kb": 0, 00:14:35.654 "state": "configuring", 00:14:35.654 "raid_level": "raid1", 00:14:35.654 "superblock": false, 00:14:35.654 "num_base_bdevs": 4, 00:14:35.654 "num_base_bdevs_discovered": 0, 00:14:35.654 "num_base_bdevs_operational": 4, 00:14:35.654 "base_bdevs_list": [ 00:14:35.654 { 00:14:35.654 "name": "BaseBdev1", 00:14:35.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.654 "is_configured": false, 00:14:35.654 "data_offset": 0, 00:14:35.654 "data_size": 0 00:14:35.654 }, 00:14:35.654 { 00:14:35.654 "name": "BaseBdev2", 00:14:35.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.654 "is_configured": false, 00:14:35.654 "data_offset": 0, 00:14:35.654 "data_size": 0 00:14:35.654 }, 00:14:35.654 { 00:14:35.654 "name": "BaseBdev3", 00:14:35.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.654 "is_configured": false, 00:14:35.654 "data_offset": 0, 00:14:35.654 "data_size": 0 00:14:35.654 }, 00:14:35.654 { 00:14:35.654 "name": "BaseBdev4", 00:14:35.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.654 "is_configured": false, 00:14:35.654 "data_offset": 0, 00:14:35.654 "data_size": 0 00:14:35.654 } 00:14:35.654 ] 00:14:35.654 }' 00:14:35.654 15:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.654 15:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.221 15:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:14:36.221 15:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.221 15:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.221 [2024-12-06 15:41:19.376754] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:36.221 [2024-12-06 15:41:19.376811] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:36.221 15:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.221 15:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:36.221 15:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.221 15:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.221 [2024-12-06 15:41:19.388718] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:36.221 [2024-12-06 15:41:19.388781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:36.221 [2024-12-06 15:41:19.388794] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:36.221 [2024-12-06 15:41:19.388809] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:36.221 [2024-12-06 15:41:19.388818] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:36.222 [2024-12-06 15:41:19.388832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:36.222 [2024-12-06 15:41:19.388840] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:36.222 [2024-12-06 15:41:19.388854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:36.222 15:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.222 15:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:36.222 15:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.222 15:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.222 [2024-12-06 15:41:19.449186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:36.222 BaseBdev1 00:14:36.222 15:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.222 15:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:36.222 15:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:36.222 15:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:36.222 15:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:36.222 15:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:36.222 15:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:36.222 15:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:36.222 15:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.222 15:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.222 15:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.222 15:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
-t 2000 00:14:36.222 15:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.222 15:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.222 [ 00:14:36.222 { 00:14:36.222 "name": "BaseBdev1", 00:14:36.222 "aliases": [ 00:14:36.222 "77f71106-f6bb-4aa1-9c8b-32f001d45e9c" 00:14:36.222 ], 00:14:36.222 "product_name": "Malloc disk", 00:14:36.222 "block_size": 512, 00:14:36.222 "num_blocks": 65536, 00:14:36.222 "uuid": "77f71106-f6bb-4aa1-9c8b-32f001d45e9c", 00:14:36.222 "assigned_rate_limits": { 00:14:36.222 "rw_ios_per_sec": 0, 00:14:36.222 "rw_mbytes_per_sec": 0, 00:14:36.222 "r_mbytes_per_sec": 0, 00:14:36.222 "w_mbytes_per_sec": 0 00:14:36.222 }, 00:14:36.222 "claimed": true, 00:14:36.222 "claim_type": "exclusive_write", 00:14:36.222 "zoned": false, 00:14:36.222 "supported_io_types": { 00:14:36.222 "read": true, 00:14:36.222 "write": true, 00:14:36.222 "unmap": true, 00:14:36.222 "flush": true, 00:14:36.222 "reset": true, 00:14:36.222 "nvme_admin": false, 00:14:36.222 "nvme_io": false, 00:14:36.222 "nvme_io_md": false, 00:14:36.222 "write_zeroes": true, 00:14:36.222 "zcopy": true, 00:14:36.222 "get_zone_info": false, 00:14:36.222 "zone_management": false, 00:14:36.222 "zone_append": false, 00:14:36.222 "compare": false, 00:14:36.222 "compare_and_write": false, 00:14:36.222 "abort": true, 00:14:36.222 "seek_hole": false, 00:14:36.222 "seek_data": false, 00:14:36.222 "copy": true, 00:14:36.222 "nvme_iov_md": false 00:14:36.222 }, 00:14:36.222 "memory_domains": [ 00:14:36.222 { 00:14:36.222 "dma_device_id": "system", 00:14:36.222 "dma_device_type": 1 00:14:36.222 }, 00:14:36.222 { 00:14:36.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.222 "dma_device_type": 2 00:14:36.222 } 00:14:36.222 ], 00:14:36.222 "driver_specific": {} 00:14:36.222 } 00:14:36.222 ] 00:14:36.222 15:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:36.222 15:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:36.222 15:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:36.222 15:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.222 15:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.222 15:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:36.222 15:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:36.222 15:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:36.222 15:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.222 15:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.222 15:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.222 15:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.222 15:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.222 15:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.222 15:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.222 15:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.480 15:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.480 15:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.480 "name": "Existed_Raid", 00:14:36.480 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:36.480 "strip_size_kb": 0, 00:14:36.480 "state": "configuring", 00:14:36.480 "raid_level": "raid1", 00:14:36.480 "superblock": false, 00:14:36.480 "num_base_bdevs": 4, 00:14:36.480 "num_base_bdevs_discovered": 1, 00:14:36.480 "num_base_bdevs_operational": 4, 00:14:36.480 "base_bdevs_list": [ 00:14:36.480 { 00:14:36.480 "name": "BaseBdev1", 00:14:36.480 "uuid": "77f71106-f6bb-4aa1-9c8b-32f001d45e9c", 00:14:36.480 "is_configured": true, 00:14:36.480 "data_offset": 0, 00:14:36.480 "data_size": 65536 00:14:36.480 }, 00:14:36.480 { 00:14:36.480 "name": "BaseBdev2", 00:14:36.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.480 "is_configured": false, 00:14:36.480 "data_offset": 0, 00:14:36.480 "data_size": 0 00:14:36.480 }, 00:14:36.480 { 00:14:36.480 "name": "BaseBdev3", 00:14:36.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.480 "is_configured": false, 00:14:36.480 "data_offset": 0, 00:14:36.480 "data_size": 0 00:14:36.480 }, 00:14:36.480 { 00:14:36.480 "name": "BaseBdev4", 00:14:36.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.480 "is_configured": false, 00:14:36.480 "data_offset": 0, 00:14:36.480 "data_size": 0 00:14:36.480 } 00:14:36.480 ] 00:14:36.480 }' 00:14:36.480 15:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.480 15:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.738 15:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:36.738 15:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.738 15:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.738 [2024-12-06 15:41:19.940689] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:36.738 [2024-12-06 15:41:19.940765] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:36.738 15:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.738 15:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:36.738 15:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.738 15:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.738 [2024-12-06 15:41:19.952719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:36.738 [2024-12-06 15:41:19.955225] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:36.738 [2024-12-06 15:41:19.955281] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:36.738 [2024-12-06 15:41:19.955293] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:36.738 [2024-12-06 15:41:19.955308] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:36.738 [2024-12-06 15:41:19.955317] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:36.738 [2024-12-06 15:41:19.955330] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:36.738 15:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.738 15:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:36.738 15:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:36.738 15:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:36.738 15:41:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.738 15:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.738 15:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:36.738 15:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:36.738 15:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:36.738 15:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.738 15:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.738 15:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.738 15:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.738 15:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.738 15:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.738 15:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.738 15:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.738 15:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.738 15:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.738 "name": "Existed_Raid", 00:14:36.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.738 "strip_size_kb": 0, 00:14:36.738 "state": "configuring", 00:14:36.738 "raid_level": "raid1", 00:14:36.738 "superblock": false, 00:14:36.738 "num_base_bdevs": 4, 00:14:36.738 "num_base_bdevs_discovered": 1, 00:14:36.738 
"num_base_bdevs_operational": 4, 00:14:36.738 "base_bdevs_list": [ 00:14:36.738 { 00:14:36.738 "name": "BaseBdev1", 00:14:36.738 "uuid": "77f71106-f6bb-4aa1-9c8b-32f001d45e9c", 00:14:36.738 "is_configured": true, 00:14:36.738 "data_offset": 0, 00:14:36.738 "data_size": 65536 00:14:36.738 }, 00:14:36.738 { 00:14:36.738 "name": "BaseBdev2", 00:14:36.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.738 "is_configured": false, 00:14:36.738 "data_offset": 0, 00:14:36.738 "data_size": 0 00:14:36.738 }, 00:14:36.738 { 00:14:36.738 "name": "BaseBdev3", 00:14:36.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.738 "is_configured": false, 00:14:36.738 "data_offset": 0, 00:14:36.738 "data_size": 0 00:14:36.738 }, 00:14:36.738 { 00:14:36.738 "name": "BaseBdev4", 00:14:36.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.738 "is_configured": false, 00:14:36.738 "data_offset": 0, 00:14:36.738 "data_size": 0 00:14:36.738 } 00:14:36.738 ] 00:14:36.738 }' 00:14:36.738 15:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.738 15:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.306 15:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:37.306 15:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.306 15:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.306 [2024-12-06 15:41:20.457143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:37.306 BaseBdev2 00:14:37.306 15:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.306 15:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:37.306 15:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev2 00:14:37.306 15:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:37.306 15:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:37.306 15:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:37.306 15:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:37.306 15:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:37.306 15:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.307 15:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.307 15:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.307 15:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:37.307 15:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.307 15:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.307 [ 00:14:37.307 { 00:14:37.307 "name": "BaseBdev2", 00:14:37.307 "aliases": [ 00:14:37.307 "d1ef4448-e49f-41c4-8ab8-1cae417bef24" 00:14:37.307 ], 00:14:37.307 "product_name": "Malloc disk", 00:14:37.307 "block_size": 512, 00:14:37.307 "num_blocks": 65536, 00:14:37.307 "uuid": "d1ef4448-e49f-41c4-8ab8-1cae417bef24", 00:14:37.307 "assigned_rate_limits": { 00:14:37.307 "rw_ios_per_sec": 0, 00:14:37.307 "rw_mbytes_per_sec": 0, 00:14:37.307 "r_mbytes_per_sec": 0, 00:14:37.307 "w_mbytes_per_sec": 0 00:14:37.307 }, 00:14:37.307 "claimed": true, 00:14:37.307 "claim_type": "exclusive_write", 00:14:37.307 "zoned": false, 00:14:37.307 "supported_io_types": { 00:14:37.307 "read": true, 00:14:37.307 "write": true, 00:14:37.307 
"unmap": true, 00:14:37.307 "flush": true, 00:14:37.307 "reset": true, 00:14:37.307 "nvme_admin": false, 00:14:37.307 "nvme_io": false, 00:14:37.307 "nvme_io_md": false, 00:14:37.307 "write_zeroes": true, 00:14:37.307 "zcopy": true, 00:14:37.307 "get_zone_info": false, 00:14:37.307 "zone_management": false, 00:14:37.307 "zone_append": false, 00:14:37.307 "compare": false, 00:14:37.307 "compare_and_write": false, 00:14:37.307 "abort": true, 00:14:37.307 "seek_hole": false, 00:14:37.307 "seek_data": false, 00:14:37.307 "copy": true, 00:14:37.307 "nvme_iov_md": false 00:14:37.307 }, 00:14:37.307 "memory_domains": [ 00:14:37.307 { 00:14:37.307 "dma_device_id": "system", 00:14:37.307 "dma_device_type": 1 00:14:37.307 }, 00:14:37.307 { 00:14:37.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.307 "dma_device_type": 2 00:14:37.307 } 00:14:37.307 ], 00:14:37.307 "driver_specific": {} 00:14:37.307 } 00:14:37.307 ] 00:14:37.307 15:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.307 15:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:37.307 15:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:37.307 15:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:37.307 15:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:37.307 15:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:37.307 15:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:37.307 15:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.307 15:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:37.307 15:41:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:37.307 15:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.307 15:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.307 15:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.307 15:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.307 15:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.307 15:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.307 15:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.307 15:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.307 15:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.307 15:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.307 "name": "Existed_Raid", 00:14:37.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.307 "strip_size_kb": 0, 00:14:37.307 "state": "configuring", 00:14:37.307 "raid_level": "raid1", 00:14:37.307 "superblock": false, 00:14:37.307 "num_base_bdevs": 4, 00:14:37.307 "num_base_bdevs_discovered": 2, 00:14:37.307 "num_base_bdevs_operational": 4, 00:14:37.307 "base_bdevs_list": [ 00:14:37.307 { 00:14:37.307 "name": "BaseBdev1", 00:14:37.307 "uuid": "77f71106-f6bb-4aa1-9c8b-32f001d45e9c", 00:14:37.307 "is_configured": true, 00:14:37.307 "data_offset": 0, 00:14:37.307 "data_size": 65536 00:14:37.307 }, 00:14:37.307 { 00:14:37.307 "name": "BaseBdev2", 00:14:37.307 "uuid": "d1ef4448-e49f-41c4-8ab8-1cae417bef24", 00:14:37.307 "is_configured": true, 00:14:37.307 
"data_offset": 0, 00:14:37.307 "data_size": 65536 00:14:37.307 }, 00:14:37.307 { 00:14:37.307 "name": "BaseBdev3", 00:14:37.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.307 "is_configured": false, 00:14:37.307 "data_offset": 0, 00:14:37.307 "data_size": 0 00:14:37.307 }, 00:14:37.307 { 00:14:37.307 "name": "BaseBdev4", 00:14:37.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.307 "is_configured": false, 00:14:37.307 "data_offset": 0, 00:14:37.307 "data_size": 0 00:14:37.307 } 00:14:37.307 ] 00:14:37.307 }' 00:14:37.307 15:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.307 15:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.875 15:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:37.875 15:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.875 15:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.875 [2024-12-06 15:41:21.017316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:37.875 BaseBdev3 00:14:37.875 15:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.875 15:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:37.875 15:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:37.875 15:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:37.875 15:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:37.875 15:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:37.875 15:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:14:37.875 15:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:37.875 15:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.875 15:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.875 15:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.875 15:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:37.875 15:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.875 15:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.875 [ 00:14:37.875 { 00:14:37.875 "name": "BaseBdev3", 00:14:37.875 "aliases": [ 00:14:37.875 "9d6a2561-79f9-41b3-89ea-81bed155907e" 00:14:37.875 ], 00:14:37.875 "product_name": "Malloc disk", 00:14:37.875 "block_size": 512, 00:14:37.875 "num_blocks": 65536, 00:14:37.875 "uuid": "9d6a2561-79f9-41b3-89ea-81bed155907e", 00:14:37.875 "assigned_rate_limits": { 00:14:37.875 "rw_ios_per_sec": 0, 00:14:37.875 "rw_mbytes_per_sec": 0, 00:14:37.875 "r_mbytes_per_sec": 0, 00:14:37.875 "w_mbytes_per_sec": 0 00:14:37.875 }, 00:14:37.875 "claimed": true, 00:14:37.875 "claim_type": "exclusive_write", 00:14:37.875 "zoned": false, 00:14:37.875 "supported_io_types": { 00:14:37.875 "read": true, 00:14:37.875 "write": true, 00:14:37.875 "unmap": true, 00:14:37.875 "flush": true, 00:14:37.875 "reset": true, 00:14:37.875 "nvme_admin": false, 00:14:37.875 "nvme_io": false, 00:14:37.875 "nvme_io_md": false, 00:14:37.875 "write_zeroes": true, 00:14:37.875 "zcopy": true, 00:14:37.875 "get_zone_info": false, 00:14:37.875 "zone_management": false, 00:14:37.875 "zone_append": false, 00:14:37.875 "compare": false, 00:14:37.875 "compare_and_write": false, 00:14:37.875 "abort": true, 
00:14:37.875 "seek_hole": false, 00:14:37.875 "seek_data": false, 00:14:37.875 "copy": true, 00:14:37.875 "nvme_iov_md": false 00:14:37.875 }, 00:14:37.875 "memory_domains": [ 00:14:37.875 { 00:14:37.875 "dma_device_id": "system", 00:14:37.875 "dma_device_type": 1 00:14:37.875 }, 00:14:37.875 { 00:14:37.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.875 "dma_device_type": 2 00:14:37.875 } 00:14:37.875 ], 00:14:37.875 "driver_specific": {} 00:14:37.875 } 00:14:37.875 ] 00:14:37.875 15:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.875 15:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:37.875 15:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:37.875 15:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:37.875 15:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:37.875 15:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:37.875 15:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:37.875 15:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.875 15:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:37.875 15:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:37.875 15:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.875 15:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.875 15:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.875 15:41:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.875 15:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.876 15:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.876 15:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.876 15:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.876 15:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.876 15:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.876 "name": "Existed_Raid", 00:14:37.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.876 "strip_size_kb": 0, 00:14:37.876 "state": "configuring", 00:14:37.876 "raid_level": "raid1", 00:14:37.876 "superblock": false, 00:14:37.876 "num_base_bdevs": 4, 00:14:37.876 "num_base_bdevs_discovered": 3, 00:14:37.876 "num_base_bdevs_operational": 4, 00:14:37.876 "base_bdevs_list": [ 00:14:37.876 { 00:14:37.876 "name": "BaseBdev1", 00:14:37.876 "uuid": "77f71106-f6bb-4aa1-9c8b-32f001d45e9c", 00:14:37.876 "is_configured": true, 00:14:37.876 "data_offset": 0, 00:14:37.876 "data_size": 65536 00:14:37.876 }, 00:14:37.876 { 00:14:37.876 "name": "BaseBdev2", 00:14:37.876 "uuid": "d1ef4448-e49f-41c4-8ab8-1cae417bef24", 00:14:37.876 "is_configured": true, 00:14:37.876 "data_offset": 0, 00:14:37.876 "data_size": 65536 00:14:37.876 }, 00:14:37.876 { 00:14:37.876 "name": "BaseBdev3", 00:14:37.876 "uuid": "9d6a2561-79f9-41b3-89ea-81bed155907e", 00:14:37.876 "is_configured": true, 00:14:37.876 "data_offset": 0, 00:14:37.876 "data_size": 65536 00:14:37.876 }, 00:14:37.876 { 00:14:37.876 "name": "BaseBdev4", 00:14:37.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.876 "is_configured": false, 00:14:37.876 "data_offset": 
0, 00:14:37.876 "data_size": 0 00:14:37.876 } 00:14:37.876 ] 00:14:37.876 }' 00:14:37.876 15:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.876 15:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.444 15:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:38.444 15:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.444 15:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.444 [2024-12-06 15:41:21.551255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:38.444 [2024-12-06 15:41:21.551522] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:38.444 [2024-12-06 15:41:21.551546] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:38.444 [2024-12-06 15:41:21.551923] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:38.444 [2024-12-06 15:41:21.552145] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:38.444 [2024-12-06 15:41:21.552163] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:38.444 [2024-12-06 15:41:21.552473] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:38.444 BaseBdev4 00:14:38.444 15:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.444 15:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:38.444 15:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:38.444 15:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local 
bdev_timeout= 00:14:38.444 15:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:38.444 15:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:38.444 15:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:38.444 15:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:38.444 15:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.444 15:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.444 15:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.444 15:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:38.444 15:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.444 15:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.444 [ 00:14:38.444 { 00:14:38.444 "name": "BaseBdev4", 00:14:38.444 "aliases": [ 00:14:38.444 "4ebb10d7-9403-4dbd-b37e-958d124126e5" 00:14:38.444 ], 00:14:38.444 "product_name": "Malloc disk", 00:14:38.444 "block_size": 512, 00:14:38.444 "num_blocks": 65536, 00:14:38.444 "uuid": "4ebb10d7-9403-4dbd-b37e-958d124126e5", 00:14:38.444 "assigned_rate_limits": { 00:14:38.444 "rw_ios_per_sec": 0, 00:14:38.444 "rw_mbytes_per_sec": 0, 00:14:38.444 "r_mbytes_per_sec": 0, 00:14:38.444 "w_mbytes_per_sec": 0 00:14:38.444 }, 00:14:38.444 "claimed": true, 00:14:38.444 "claim_type": "exclusive_write", 00:14:38.444 "zoned": false, 00:14:38.444 "supported_io_types": { 00:14:38.444 "read": true, 00:14:38.444 "write": true, 00:14:38.444 "unmap": true, 00:14:38.444 "flush": true, 00:14:38.444 "reset": true, 00:14:38.444 "nvme_admin": false, 00:14:38.444 "nvme_io": 
false, 00:14:38.444 "nvme_io_md": false, 00:14:38.444 "write_zeroes": true, 00:14:38.444 "zcopy": true, 00:14:38.444 "get_zone_info": false, 00:14:38.444 "zone_management": false, 00:14:38.444 "zone_append": false, 00:14:38.444 "compare": false, 00:14:38.444 "compare_and_write": false, 00:14:38.444 "abort": true, 00:14:38.444 "seek_hole": false, 00:14:38.444 "seek_data": false, 00:14:38.444 "copy": true, 00:14:38.444 "nvme_iov_md": false 00:14:38.444 }, 00:14:38.444 "memory_domains": [ 00:14:38.444 { 00:14:38.444 "dma_device_id": "system", 00:14:38.444 "dma_device_type": 1 00:14:38.444 }, 00:14:38.444 { 00:14:38.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.444 "dma_device_type": 2 00:14:38.444 } 00:14:38.444 ], 00:14:38.444 "driver_specific": {} 00:14:38.444 } 00:14:38.444 ] 00:14:38.444 15:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.444 15:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:38.444 15:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:38.444 15:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:38.444 15:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:14:38.444 15:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:38.444 15:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.444 15:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:38.444 15:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:38.444 15:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:38.444 15:41:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.444 15:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.444 15:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.444 15:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.444 15:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.444 15:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.444 15:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.444 15:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.444 15:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.444 15:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.444 "name": "Existed_Raid", 00:14:38.444 "uuid": "973757fe-851e-42fd-b6d9-5ebb2298ff01", 00:14:38.444 "strip_size_kb": 0, 00:14:38.444 "state": "online", 00:14:38.444 "raid_level": "raid1", 00:14:38.444 "superblock": false, 00:14:38.444 "num_base_bdevs": 4, 00:14:38.444 "num_base_bdevs_discovered": 4, 00:14:38.445 "num_base_bdevs_operational": 4, 00:14:38.445 "base_bdevs_list": [ 00:14:38.445 { 00:14:38.445 "name": "BaseBdev1", 00:14:38.445 "uuid": "77f71106-f6bb-4aa1-9c8b-32f001d45e9c", 00:14:38.445 "is_configured": true, 00:14:38.445 "data_offset": 0, 00:14:38.445 "data_size": 65536 00:14:38.445 }, 00:14:38.445 { 00:14:38.445 "name": "BaseBdev2", 00:14:38.445 "uuid": "d1ef4448-e49f-41c4-8ab8-1cae417bef24", 00:14:38.445 "is_configured": true, 00:14:38.445 "data_offset": 0, 00:14:38.445 "data_size": 65536 00:14:38.445 }, 00:14:38.445 { 00:14:38.445 "name": "BaseBdev3", 00:14:38.445 "uuid": "9d6a2561-79f9-41b3-89ea-81bed155907e", 
00:14:38.445 "is_configured": true, 00:14:38.445 "data_offset": 0, 00:14:38.445 "data_size": 65536 00:14:38.445 }, 00:14:38.445 { 00:14:38.445 "name": "BaseBdev4", 00:14:38.445 "uuid": "4ebb10d7-9403-4dbd-b37e-958d124126e5", 00:14:38.445 "is_configured": true, 00:14:38.445 "data_offset": 0, 00:14:38.445 "data_size": 65536 00:14:38.445 } 00:14:38.445 ] 00:14:38.445 }' 00:14:38.445 15:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.445 15:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.011 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:39.011 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:39.011 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:39.011 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:39.011 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:39.011 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:39.011 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:39.011 15:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.011 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:39.011 15:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.011 [2024-12-06 15:41:22.043004] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:39.011 15:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.011 15:41:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:39.011 "name": "Existed_Raid", 00:14:39.011 "aliases": [ 00:14:39.011 "973757fe-851e-42fd-b6d9-5ebb2298ff01" 00:14:39.011 ], 00:14:39.011 "product_name": "Raid Volume", 00:14:39.011 "block_size": 512, 00:14:39.011 "num_blocks": 65536, 00:14:39.011 "uuid": "973757fe-851e-42fd-b6d9-5ebb2298ff01", 00:14:39.011 "assigned_rate_limits": { 00:14:39.011 "rw_ios_per_sec": 0, 00:14:39.011 "rw_mbytes_per_sec": 0, 00:14:39.011 "r_mbytes_per_sec": 0, 00:14:39.011 "w_mbytes_per_sec": 0 00:14:39.011 }, 00:14:39.011 "claimed": false, 00:14:39.011 "zoned": false, 00:14:39.011 "supported_io_types": { 00:14:39.011 "read": true, 00:14:39.011 "write": true, 00:14:39.011 "unmap": false, 00:14:39.011 "flush": false, 00:14:39.011 "reset": true, 00:14:39.011 "nvme_admin": false, 00:14:39.011 "nvme_io": false, 00:14:39.011 "nvme_io_md": false, 00:14:39.011 "write_zeroes": true, 00:14:39.011 "zcopy": false, 00:14:39.011 "get_zone_info": false, 00:14:39.011 "zone_management": false, 00:14:39.011 "zone_append": false, 00:14:39.011 "compare": false, 00:14:39.011 "compare_and_write": false, 00:14:39.011 "abort": false, 00:14:39.011 "seek_hole": false, 00:14:39.011 "seek_data": false, 00:14:39.011 "copy": false, 00:14:39.011 "nvme_iov_md": false 00:14:39.011 }, 00:14:39.011 "memory_domains": [ 00:14:39.011 { 00:14:39.011 "dma_device_id": "system", 00:14:39.011 "dma_device_type": 1 00:14:39.011 }, 00:14:39.011 { 00:14:39.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.011 "dma_device_type": 2 00:14:39.011 }, 00:14:39.011 { 00:14:39.011 "dma_device_id": "system", 00:14:39.011 "dma_device_type": 1 00:14:39.011 }, 00:14:39.011 { 00:14:39.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.011 "dma_device_type": 2 00:14:39.011 }, 00:14:39.011 { 00:14:39.011 "dma_device_id": "system", 00:14:39.011 "dma_device_type": 1 00:14:39.011 }, 00:14:39.011 { 00:14:39.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.011 "dma_device_type": 2 
00:14:39.011 }, 00:14:39.011 { 00:14:39.011 "dma_device_id": "system", 00:14:39.011 "dma_device_type": 1 00:14:39.011 }, 00:14:39.011 { 00:14:39.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.011 "dma_device_type": 2 00:14:39.011 } 00:14:39.011 ], 00:14:39.011 "driver_specific": { 00:14:39.011 "raid": { 00:14:39.011 "uuid": "973757fe-851e-42fd-b6d9-5ebb2298ff01", 00:14:39.011 "strip_size_kb": 0, 00:14:39.011 "state": "online", 00:14:39.011 "raid_level": "raid1", 00:14:39.011 "superblock": false, 00:14:39.011 "num_base_bdevs": 4, 00:14:39.011 "num_base_bdevs_discovered": 4, 00:14:39.011 "num_base_bdevs_operational": 4, 00:14:39.011 "base_bdevs_list": [ 00:14:39.011 { 00:14:39.011 "name": "BaseBdev1", 00:14:39.011 "uuid": "77f71106-f6bb-4aa1-9c8b-32f001d45e9c", 00:14:39.011 "is_configured": true, 00:14:39.011 "data_offset": 0, 00:14:39.011 "data_size": 65536 00:14:39.011 }, 00:14:39.011 { 00:14:39.011 "name": "BaseBdev2", 00:14:39.011 "uuid": "d1ef4448-e49f-41c4-8ab8-1cae417bef24", 00:14:39.011 "is_configured": true, 00:14:39.011 "data_offset": 0, 00:14:39.011 "data_size": 65536 00:14:39.011 }, 00:14:39.011 { 00:14:39.011 "name": "BaseBdev3", 00:14:39.011 "uuid": "9d6a2561-79f9-41b3-89ea-81bed155907e", 00:14:39.011 "is_configured": true, 00:14:39.011 "data_offset": 0, 00:14:39.011 "data_size": 65536 00:14:39.011 }, 00:14:39.011 { 00:14:39.011 "name": "BaseBdev4", 00:14:39.011 "uuid": "4ebb10d7-9403-4dbd-b37e-958d124126e5", 00:14:39.011 "is_configured": true, 00:14:39.011 "data_offset": 0, 00:14:39.011 "data_size": 65536 00:14:39.011 } 00:14:39.011 ] 00:14:39.011 } 00:14:39.011 } 00:14:39.011 }' 00:14:39.011 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:39.011 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:39.011 BaseBdev2 00:14:39.011 BaseBdev3 00:14:39.011 BaseBdev4' 00:14:39.011 
15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.011 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:39.011 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.011 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:39.011 15:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.011 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.011 15:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.011 15:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.011 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.011 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.011 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.011 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:39.011 15:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.011 15:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.011 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.011 15:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.011 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:14:39.011 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.011 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.011 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.011 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:39.011 15:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.011 15:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.270 15:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.270 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.270 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.270 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.270 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:39.270 15:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.270 15:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.270 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.270 15:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.270 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.271 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:14:39.271 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:39.271 15:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.271 15:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.271 [2024-12-06 15:41:22.374700] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:39.271 15:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.271 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:39.271 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:39.271 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:39.271 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:39.271 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:39.271 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:39.271 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:39.271 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.271 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:39.271 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:39.271 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:39.271 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.271 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:14:39.271 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.271 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.271 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.271 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:39.271 15:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.271 15:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.271 15:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.271 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.271 "name": "Existed_Raid", 00:14:39.271 "uuid": "973757fe-851e-42fd-b6d9-5ebb2298ff01", 00:14:39.271 "strip_size_kb": 0, 00:14:39.271 "state": "online", 00:14:39.271 "raid_level": "raid1", 00:14:39.271 "superblock": false, 00:14:39.271 "num_base_bdevs": 4, 00:14:39.271 "num_base_bdevs_discovered": 3, 00:14:39.271 "num_base_bdevs_operational": 3, 00:14:39.271 "base_bdevs_list": [ 00:14:39.271 { 00:14:39.271 "name": null, 00:14:39.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.271 "is_configured": false, 00:14:39.271 "data_offset": 0, 00:14:39.271 "data_size": 65536 00:14:39.271 }, 00:14:39.271 { 00:14:39.271 "name": "BaseBdev2", 00:14:39.271 "uuid": "d1ef4448-e49f-41c4-8ab8-1cae417bef24", 00:14:39.271 "is_configured": true, 00:14:39.271 "data_offset": 0, 00:14:39.271 "data_size": 65536 00:14:39.271 }, 00:14:39.271 { 00:14:39.271 "name": "BaseBdev3", 00:14:39.271 "uuid": "9d6a2561-79f9-41b3-89ea-81bed155907e", 00:14:39.271 "is_configured": true, 00:14:39.271 "data_offset": 0, 00:14:39.271 "data_size": 65536 00:14:39.271 }, 00:14:39.271 { 
00:14:39.271 "name": "BaseBdev4", 00:14:39.271 "uuid": "4ebb10d7-9403-4dbd-b37e-958d124126e5", 00:14:39.271 "is_configured": true, 00:14:39.271 "data_offset": 0, 00:14:39.271 "data_size": 65536 00:14:39.271 } 00:14:39.271 ] 00:14:39.271 }' 00:14:39.271 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.271 15:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.837 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:39.837 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:39.837 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.837 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:39.837 15:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.837 15:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.837 15:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.837 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:39.837 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:39.837 15:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:39.837 15:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.837 15:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.837 [2024-12-06 15:41:22.957445] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:39.837 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.837 
15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:39.837 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:39.837 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.837 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:39.837 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.837 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.837 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.837 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:39.837 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:39.837 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:39.837 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.837 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.837 [2024-12-06 15:41:23.111407] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:40.094 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.094 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:40.094 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:40.094 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:40.094 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.094 15:41:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.094 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.094 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.094 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:40.094 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:40.094 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:40.094 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.094 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.094 [2024-12-06 15:41:23.270774] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:40.094 [2024-12-06 15:41:23.270943] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:40.094 [2024-12-06 15:41:23.381746] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:40.094 [2024-12-06 15:41:23.382127] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:40.094 [2024-12-06 15:41:23.382181] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:40.094 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.094 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:40.094 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:40.094 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.094 15:41:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.094 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.352 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:40.352 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.352 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:40.352 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:40.352 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:40.352 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:40.352 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:40.352 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:40.352 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.352 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.352 BaseBdev2 00:14:40.352 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.352 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:40.352 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:40.352 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:40.352 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:40.352 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:40.352 15:41:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:40.352 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:40.352 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.352 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.352 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.353 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:40.353 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.353 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.353 [ 00:14:40.353 { 00:14:40.353 "name": "BaseBdev2", 00:14:40.353 "aliases": [ 00:14:40.353 "72c9e288-ac30-45eb-92cd-b05cfdbcab7e" 00:14:40.353 ], 00:14:40.353 "product_name": "Malloc disk", 00:14:40.353 "block_size": 512, 00:14:40.353 "num_blocks": 65536, 00:14:40.353 "uuid": "72c9e288-ac30-45eb-92cd-b05cfdbcab7e", 00:14:40.353 "assigned_rate_limits": { 00:14:40.353 "rw_ios_per_sec": 0, 00:14:40.353 "rw_mbytes_per_sec": 0, 00:14:40.353 "r_mbytes_per_sec": 0, 00:14:40.353 "w_mbytes_per_sec": 0 00:14:40.353 }, 00:14:40.353 "claimed": false, 00:14:40.353 "zoned": false, 00:14:40.353 "supported_io_types": { 00:14:40.353 "read": true, 00:14:40.353 "write": true, 00:14:40.353 "unmap": true, 00:14:40.353 "flush": true, 00:14:40.353 "reset": true, 00:14:40.353 "nvme_admin": false, 00:14:40.353 "nvme_io": false, 00:14:40.353 "nvme_io_md": false, 00:14:40.353 "write_zeroes": true, 00:14:40.353 "zcopy": true, 00:14:40.353 "get_zone_info": false, 00:14:40.353 "zone_management": false, 00:14:40.353 "zone_append": false, 00:14:40.353 "compare": false, 00:14:40.353 "compare_and_write": false, 
00:14:40.353 "abort": true, 00:14:40.353 "seek_hole": false, 00:14:40.353 "seek_data": false, 00:14:40.353 "copy": true, 00:14:40.353 "nvme_iov_md": false 00:14:40.353 }, 00:14:40.353 "memory_domains": [ 00:14:40.353 { 00:14:40.353 "dma_device_id": "system", 00:14:40.353 "dma_device_type": 1 00:14:40.353 }, 00:14:40.353 { 00:14:40.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.353 "dma_device_type": 2 00:14:40.353 } 00:14:40.353 ], 00:14:40.353 "driver_specific": {} 00:14:40.353 } 00:14:40.353 ] 00:14:40.353 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.353 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:40.353 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:40.353 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:40.353 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:40.353 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.353 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.353 BaseBdev3 00:14:40.353 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.353 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:40.353 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:40.353 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:40.353 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:40.353 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:40.353 15:41:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:40.353 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:40.353 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.353 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.353 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.353 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:40.353 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.353 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.353 [ 00:14:40.353 { 00:14:40.353 "name": "BaseBdev3", 00:14:40.353 "aliases": [ 00:14:40.353 "1d5a4999-a938-4121-8234-de505dba7f01" 00:14:40.353 ], 00:14:40.353 "product_name": "Malloc disk", 00:14:40.353 "block_size": 512, 00:14:40.353 "num_blocks": 65536, 00:14:40.353 "uuid": "1d5a4999-a938-4121-8234-de505dba7f01", 00:14:40.353 "assigned_rate_limits": { 00:14:40.353 "rw_ios_per_sec": 0, 00:14:40.353 "rw_mbytes_per_sec": 0, 00:14:40.353 "r_mbytes_per_sec": 0, 00:14:40.353 "w_mbytes_per_sec": 0 00:14:40.353 }, 00:14:40.353 "claimed": false, 00:14:40.353 "zoned": false, 00:14:40.353 "supported_io_types": { 00:14:40.353 "read": true, 00:14:40.353 "write": true, 00:14:40.353 "unmap": true, 00:14:40.353 "flush": true, 00:14:40.353 "reset": true, 00:14:40.353 "nvme_admin": false, 00:14:40.353 "nvme_io": false, 00:14:40.353 "nvme_io_md": false, 00:14:40.353 "write_zeroes": true, 00:14:40.353 "zcopy": true, 00:14:40.353 "get_zone_info": false, 00:14:40.353 "zone_management": false, 00:14:40.353 "zone_append": false, 00:14:40.353 "compare": false, 00:14:40.353 "compare_and_write": false, 
00:14:40.353 "abort": true, 00:14:40.353 "seek_hole": false, 00:14:40.353 "seek_data": false, 00:14:40.353 "copy": true, 00:14:40.353 "nvme_iov_md": false 00:14:40.353 }, 00:14:40.353 "memory_domains": [ 00:14:40.353 { 00:14:40.353 "dma_device_id": "system", 00:14:40.353 "dma_device_type": 1 00:14:40.353 }, 00:14:40.353 { 00:14:40.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.353 "dma_device_type": 2 00:14:40.353 } 00:14:40.353 ], 00:14:40.353 "driver_specific": {} 00:14:40.353 } 00:14:40.353 ] 00:14:40.353 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.353 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:40.353 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:40.353 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:40.353 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:40.353 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.353 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.612 BaseBdev4 00:14:40.612 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.612 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:40.612 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:40.612 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:40.612 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:40.612 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:40.612 15:41:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:40.612 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:40.612 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.612 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.612 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.612 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:40.612 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.612 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.612 [ 00:14:40.612 { 00:14:40.612 "name": "BaseBdev4", 00:14:40.612 "aliases": [ 00:14:40.612 "7b16bbfa-9f3f-4a2f-8fda-2d94375c6038" 00:14:40.612 ], 00:14:40.612 "product_name": "Malloc disk", 00:14:40.612 "block_size": 512, 00:14:40.612 "num_blocks": 65536, 00:14:40.612 "uuid": "7b16bbfa-9f3f-4a2f-8fda-2d94375c6038", 00:14:40.612 "assigned_rate_limits": { 00:14:40.612 "rw_ios_per_sec": 0, 00:14:40.612 "rw_mbytes_per_sec": 0, 00:14:40.612 "r_mbytes_per_sec": 0, 00:14:40.612 "w_mbytes_per_sec": 0 00:14:40.612 }, 00:14:40.612 "claimed": false, 00:14:40.612 "zoned": false, 00:14:40.612 "supported_io_types": { 00:14:40.612 "read": true, 00:14:40.612 "write": true, 00:14:40.612 "unmap": true, 00:14:40.612 "flush": true, 00:14:40.612 "reset": true, 00:14:40.612 "nvme_admin": false, 00:14:40.612 "nvme_io": false, 00:14:40.612 "nvme_io_md": false, 00:14:40.612 "write_zeroes": true, 00:14:40.612 "zcopy": true, 00:14:40.612 "get_zone_info": false, 00:14:40.612 "zone_management": false, 00:14:40.612 "zone_append": false, 00:14:40.612 "compare": false, 00:14:40.612 "compare_and_write": false, 
00:14:40.612 "abort": true, 00:14:40.612 "seek_hole": false, 00:14:40.612 "seek_data": false, 00:14:40.612 "copy": true, 00:14:40.612 "nvme_iov_md": false 00:14:40.612 }, 00:14:40.612 "memory_domains": [ 00:14:40.612 { 00:14:40.612 "dma_device_id": "system", 00:14:40.612 "dma_device_type": 1 00:14:40.612 }, 00:14:40.612 { 00:14:40.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.612 "dma_device_type": 2 00:14:40.612 } 00:14:40.612 ], 00:14:40.612 "driver_specific": {} 00:14:40.612 } 00:14:40.612 ] 00:14:40.612 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.612 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:40.612 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:40.612 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:40.612 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:40.612 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.612 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.612 [2024-12-06 15:41:23.707807] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:40.612 [2024-12-06 15:41:23.708058] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:40.612 [2024-12-06 15:41:23.708196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:40.612 [2024-12-06 15:41:23.711208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:40.612 [2024-12-06 15:41:23.711437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:40.612 15:41:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.612 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:40.612 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:40.612 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:40.612 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:40.612 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:40.612 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:40.612 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.612 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.612 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.612 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.612 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.612 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.612 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.612 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.612 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.612 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.612 "name": "Existed_Raid", 00:14:40.612 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:40.612 "strip_size_kb": 0, 00:14:40.612 "state": "configuring", 00:14:40.612 "raid_level": "raid1", 00:14:40.612 "superblock": false, 00:14:40.612 "num_base_bdevs": 4, 00:14:40.612 "num_base_bdevs_discovered": 3, 00:14:40.612 "num_base_bdevs_operational": 4, 00:14:40.612 "base_bdevs_list": [ 00:14:40.612 { 00:14:40.612 "name": "BaseBdev1", 00:14:40.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.612 "is_configured": false, 00:14:40.612 "data_offset": 0, 00:14:40.612 "data_size": 0 00:14:40.612 }, 00:14:40.612 { 00:14:40.612 "name": "BaseBdev2", 00:14:40.612 "uuid": "72c9e288-ac30-45eb-92cd-b05cfdbcab7e", 00:14:40.612 "is_configured": true, 00:14:40.612 "data_offset": 0, 00:14:40.612 "data_size": 65536 00:14:40.612 }, 00:14:40.612 { 00:14:40.612 "name": "BaseBdev3", 00:14:40.612 "uuid": "1d5a4999-a938-4121-8234-de505dba7f01", 00:14:40.612 "is_configured": true, 00:14:40.612 "data_offset": 0, 00:14:40.612 "data_size": 65536 00:14:40.612 }, 00:14:40.612 { 00:14:40.612 "name": "BaseBdev4", 00:14:40.612 "uuid": "7b16bbfa-9f3f-4a2f-8fda-2d94375c6038", 00:14:40.612 "is_configured": true, 00:14:40.612 "data_offset": 0, 00:14:40.612 "data_size": 65536 00:14:40.612 } 00:14:40.612 ] 00:14:40.612 }' 00:14:40.612 15:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.612 15:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.871 15:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:40.871 15:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.871 15:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.871 [2024-12-06 15:41:24.107765] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:40.871 15:41:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.871 15:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:40.871 15:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:40.871 15:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:40.871 15:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:40.871 15:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:40.871 15:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:40.871 15:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.871 15:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.871 15:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.871 15:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.871 15:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.871 15:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.871 15:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.871 15:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.871 15:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.871 15:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.871 "name": "Existed_Raid", 00:14:40.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.871 
"strip_size_kb": 0, 00:14:40.871 "state": "configuring", 00:14:40.871 "raid_level": "raid1", 00:14:40.871 "superblock": false, 00:14:40.871 "num_base_bdevs": 4, 00:14:40.871 "num_base_bdevs_discovered": 2, 00:14:40.871 "num_base_bdevs_operational": 4, 00:14:40.871 "base_bdevs_list": [ 00:14:40.871 { 00:14:40.871 "name": "BaseBdev1", 00:14:40.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.871 "is_configured": false, 00:14:40.871 "data_offset": 0, 00:14:40.871 "data_size": 0 00:14:40.871 }, 00:14:40.871 { 00:14:40.871 "name": null, 00:14:40.871 "uuid": "72c9e288-ac30-45eb-92cd-b05cfdbcab7e", 00:14:40.871 "is_configured": false, 00:14:40.871 "data_offset": 0, 00:14:40.871 "data_size": 65536 00:14:40.871 }, 00:14:40.871 { 00:14:40.871 "name": "BaseBdev3", 00:14:40.871 "uuid": "1d5a4999-a938-4121-8234-de505dba7f01", 00:14:40.871 "is_configured": true, 00:14:40.871 "data_offset": 0, 00:14:40.871 "data_size": 65536 00:14:40.871 }, 00:14:40.871 { 00:14:40.871 "name": "BaseBdev4", 00:14:40.871 "uuid": "7b16bbfa-9f3f-4a2f-8fda-2d94375c6038", 00:14:40.871 "is_configured": true, 00:14:40.871 "data_offset": 0, 00:14:40.871 "data_size": 65536 00:14:40.871 } 00:14:40.871 ] 00:14:40.871 }' 00:14:40.871 15:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.871 15:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.437 15:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:41.437 15:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.437 15:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.437 15:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.437 15:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.437 15:41:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:41.437 15:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:41.437 15:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.437 15:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.437 [2024-12-06 15:41:24.583470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:41.437 BaseBdev1 00:14:41.437 15:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.437 15:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:41.437 15:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:41.437 15:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:41.437 15:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:41.437 15:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:41.437 15:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:41.437 15:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:41.437 15:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.437 15:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.437 15:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.437 15:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:41.437 15:41:24 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.437 15:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.437 [ 00:14:41.437 { 00:14:41.437 "name": "BaseBdev1", 00:14:41.437 "aliases": [ 00:14:41.437 "5f2ebc81-b4df-4da7-bd04-21d2250f7689" 00:14:41.437 ], 00:14:41.437 "product_name": "Malloc disk", 00:14:41.437 "block_size": 512, 00:14:41.437 "num_blocks": 65536, 00:14:41.437 "uuid": "5f2ebc81-b4df-4da7-bd04-21d2250f7689", 00:14:41.437 "assigned_rate_limits": { 00:14:41.437 "rw_ios_per_sec": 0, 00:14:41.437 "rw_mbytes_per_sec": 0, 00:14:41.437 "r_mbytes_per_sec": 0, 00:14:41.437 "w_mbytes_per_sec": 0 00:14:41.437 }, 00:14:41.437 "claimed": true, 00:14:41.437 "claim_type": "exclusive_write", 00:14:41.437 "zoned": false, 00:14:41.438 "supported_io_types": { 00:14:41.438 "read": true, 00:14:41.438 "write": true, 00:14:41.438 "unmap": true, 00:14:41.438 "flush": true, 00:14:41.438 "reset": true, 00:14:41.438 "nvme_admin": false, 00:14:41.438 "nvme_io": false, 00:14:41.438 "nvme_io_md": false, 00:14:41.438 "write_zeroes": true, 00:14:41.438 "zcopy": true, 00:14:41.438 "get_zone_info": false, 00:14:41.438 "zone_management": false, 00:14:41.438 "zone_append": false, 00:14:41.438 "compare": false, 00:14:41.438 "compare_and_write": false, 00:14:41.438 "abort": true, 00:14:41.438 "seek_hole": false, 00:14:41.438 "seek_data": false, 00:14:41.438 "copy": true, 00:14:41.438 "nvme_iov_md": false 00:14:41.438 }, 00:14:41.438 "memory_domains": [ 00:14:41.438 { 00:14:41.438 "dma_device_id": "system", 00:14:41.438 "dma_device_type": 1 00:14:41.438 }, 00:14:41.438 { 00:14:41.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.438 "dma_device_type": 2 00:14:41.438 } 00:14:41.438 ], 00:14:41.438 "driver_specific": {} 00:14:41.438 } 00:14:41.438 ] 00:14:41.438 15:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.438 15:41:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@911 -- # return 0 00:14:41.438 15:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:41.438 15:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:41.438 15:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.438 15:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:41.438 15:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:41.438 15:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:41.438 15:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.438 15:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.438 15:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.438 15:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.438 15:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.438 15:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.438 15:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.438 15:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.438 15:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.438 15:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.438 "name": "Existed_Raid", 00:14:41.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.438 
"strip_size_kb": 0, 00:14:41.438 "state": "configuring", 00:14:41.438 "raid_level": "raid1", 00:14:41.438 "superblock": false, 00:14:41.438 "num_base_bdevs": 4, 00:14:41.438 "num_base_bdevs_discovered": 3, 00:14:41.438 "num_base_bdevs_operational": 4, 00:14:41.438 "base_bdevs_list": [ 00:14:41.438 { 00:14:41.438 "name": "BaseBdev1", 00:14:41.438 "uuid": "5f2ebc81-b4df-4da7-bd04-21d2250f7689", 00:14:41.438 "is_configured": true, 00:14:41.438 "data_offset": 0, 00:14:41.438 "data_size": 65536 00:14:41.438 }, 00:14:41.438 { 00:14:41.438 "name": null, 00:14:41.438 "uuid": "72c9e288-ac30-45eb-92cd-b05cfdbcab7e", 00:14:41.438 "is_configured": false, 00:14:41.438 "data_offset": 0, 00:14:41.438 "data_size": 65536 00:14:41.438 }, 00:14:41.438 { 00:14:41.438 "name": "BaseBdev3", 00:14:41.438 "uuid": "1d5a4999-a938-4121-8234-de505dba7f01", 00:14:41.438 "is_configured": true, 00:14:41.438 "data_offset": 0, 00:14:41.438 "data_size": 65536 00:14:41.438 }, 00:14:41.438 { 00:14:41.438 "name": "BaseBdev4", 00:14:41.438 "uuid": "7b16bbfa-9f3f-4a2f-8fda-2d94375c6038", 00:14:41.438 "is_configured": true, 00:14:41.438 "data_offset": 0, 00:14:41.438 "data_size": 65536 00:14:41.438 } 00:14:41.438 ] 00:14:41.438 }' 00:14:41.438 15:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.438 15:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.010 15:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.011 15:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:42.011 15:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.011 15:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.011 15:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.011 
15:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:42.011 15:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:42.011 15:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.011 15:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.011 [2024-12-06 15:41:25.082943] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:42.011 15:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.011 15:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:42.011 15:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.011 15:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:42.011 15:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:42.011 15:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:42.011 15:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:42.011 15:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.011 15:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.012 15:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.012 15:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.012 15:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.012 15:41:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.012 15:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.012 15:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.012 15:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.012 15:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.012 "name": "Existed_Raid", 00:14:42.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.012 "strip_size_kb": 0, 00:14:42.012 "state": "configuring", 00:14:42.012 "raid_level": "raid1", 00:14:42.012 "superblock": false, 00:14:42.012 "num_base_bdevs": 4, 00:14:42.012 "num_base_bdevs_discovered": 2, 00:14:42.012 "num_base_bdevs_operational": 4, 00:14:42.012 "base_bdevs_list": [ 00:14:42.012 { 00:14:42.012 "name": "BaseBdev1", 00:14:42.012 "uuid": "5f2ebc81-b4df-4da7-bd04-21d2250f7689", 00:14:42.012 "is_configured": true, 00:14:42.012 "data_offset": 0, 00:14:42.012 "data_size": 65536 00:14:42.012 }, 00:14:42.012 { 00:14:42.012 "name": null, 00:14:42.012 "uuid": "72c9e288-ac30-45eb-92cd-b05cfdbcab7e", 00:14:42.012 "is_configured": false, 00:14:42.012 "data_offset": 0, 00:14:42.012 "data_size": 65536 00:14:42.012 }, 00:14:42.012 { 00:14:42.012 "name": null, 00:14:42.012 "uuid": "1d5a4999-a938-4121-8234-de505dba7f01", 00:14:42.013 "is_configured": false, 00:14:42.013 "data_offset": 0, 00:14:42.013 "data_size": 65536 00:14:42.013 }, 00:14:42.013 { 00:14:42.013 "name": "BaseBdev4", 00:14:42.013 "uuid": "7b16bbfa-9f3f-4a2f-8fda-2d94375c6038", 00:14:42.013 "is_configured": true, 00:14:42.013 "data_offset": 0, 00:14:42.013 "data_size": 65536 00:14:42.013 } 00:14:42.013 ] 00:14:42.013 }' 00:14:42.013 15:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.013 15:41:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:42.295 15:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.295 15:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.295 15:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.295 15:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:42.295 15:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.295 15:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:42.295 15:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:42.295 15:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.295 15:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.295 [2024-12-06 15:41:25.502321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:42.295 15:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.295 15:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:42.295 15:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.295 15:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:42.295 15:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:42.295 15:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:42.295 15:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:14:42.295 15:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.295 15:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.295 15:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.295 15:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.295 15:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.295 15:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.295 15:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.295 15:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.295 15:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.295 15:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.295 "name": "Existed_Raid", 00:14:42.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.295 "strip_size_kb": 0, 00:14:42.295 "state": "configuring", 00:14:42.295 "raid_level": "raid1", 00:14:42.295 "superblock": false, 00:14:42.295 "num_base_bdevs": 4, 00:14:42.295 "num_base_bdevs_discovered": 3, 00:14:42.295 "num_base_bdevs_operational": 4, 00:14:42.295 "base_bdevs_list": [ 00:14:42.295 { 00:14:42.295 "name": "BaseBdev1", 00:14:42.295 "uuid": "5f2ebc81-b4df-4da7-bd04-21d2250f7689", 00:14:42.295 "is_configured": true, 00:14:42.295 "data_offset": 0, 00:14:42.295 "data_size": 65536 00:14:42.295 }, 00:14:42.295 { 00:14:42.295 "name": null, 00:14:42.295 "uuid": "72c9e288-ac30-45eb-92cd-b05cfdbcab7e", 00:14:42.295 "is_configured": false, 00:14:42.295 "data_offset": 0, 00:14:42.295 "data_size": 65536 00:14:42.295 }, 00:14:42.295 { 
00:14:42.295 "name": "BaseBdev3", 00:14:42.295 "uuid": "1d5a4999-a938-4121-8234-de505dba7f01", 00:14:42.295 "is_configured": true, 00:14:42.295 "data_offset": 0, 00:14:42.296 "data_size": 65536 00:14:42.296 }, 00:14:42.296 { 00:14:42.296 "name": "BaseBdev4", 00:14:42.296 "uuid": "7b16bbfa-9f3f-4a2f-8fda-2d94375c6038", 00:14:42.296 "is_configured": true, 00:14:42.296 "data_offset": 0, 00:14:42.296 "data_size": 65536 00:14:42.296 } 00:14:42.296 ] 00:14:42.296 }' 00:14:42.296 15:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.296 15:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.863 15:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.863 15:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.863 15:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:42.863 15:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.863 15:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.863 15:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:42.863 15:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:42.863 15:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.863 15:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.863 [2024-12-06 15:41:26.018083] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:42.863 15:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.863 15:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:42.863 15:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.863 15:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:42.863 15:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:42.863 15:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:42.863 15:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:42.863 15:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.863 15:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.863 15:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.863 15:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.863 15:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.863 15:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.863 15:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.863 15:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.122 15:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.122 15:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.122 "name": "Existed_Raid", 00:14:43.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.122 "strip_size_kb": 0, 00:14:43.122 "state": "configuring", 00:14:43.122 "raid_level": "raid1", 00:14:43.122 "superblock": false, 00:14:43.122 
"num_base_bdevs": 4, 00:14:43.122 "num_base_bdevs_discovered": 2, 00:14:43.122 "num_base_bdevs_operational": 4, 00:14:43.122 "base_bdevs_list": [ 00:14:43.122 { 00:14:43.122 "name": null, 00:14:43.122 "uuid": "5f2ebc81-b4df-4da7-bd04-21d2250f7689", 00:14:43.122 "is_configured": false, 00:14:43.122 "data_offset": 0, 00:14:43.122 "data_size": 65536 00:14:43.122 }, 00:14:43.122 { 00:14:43.122 "name": null, 00:14:43.122 "uuid": "72c9e288-ac30-45eb-92cd-b05cfdbcab7e", 00:14:43.122 "is_configured": false, 00:14:43.122 "data_offset": 0, 00:14:43.122 "data_size": 65536 00:14:43.122 }, 00:14:43.122 { 00:14:43.122 "name": "BaseBdev3", 00:14:43.122 "uuid": "1d5a4999-a938-4121-8234-de505dba7f01", 00:14:43.122 "is_configured": true, 00:14:43.122 "data_offset": 0, 00:14:43.122 "data_size": 65536 00:14:43.122 }, 00:14:43.122 { 00:14:43.122 "name": "BaseBdev4", 00:14:43.122 "uuid": "7b16bbfa-9f3f-4a2f-8fda-2d94375c6038", 00:14:43.122 "is_configured": true, 00:14:43.122 "data_offset": 0, 00:14:43.122 "data_size": 65536 00:14:43.122 } 00:14:43.122 ] 00:14:43.122 }' 00:14:43.122 15:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.122 15:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.381 15:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.381 15:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:43.381 15:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.381 15:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.381 15:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.381 15:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:43.381 15:41:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:43.381 15:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.381 15:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.381 [2024-12-06 15:41:26.611767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:43.381 15:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.381 15:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:43.381 15:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:43.381 15:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:43.381 15:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:43.381 15:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:43.381 15:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:43.381 15:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.381 15:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.381 15:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.381 15:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.381 15:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.381 15:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.381 15:41:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.381 15:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.381 15:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.381 15:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.381 "name": "Existed_Raid", 00:14:43.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.381 "strip_size_kb": 0, 00:14:43.381 "state": "configuring", 00:14:43.381 "raid_level": "raid1", 00:14:43.381 "superblock": false, 00:14:43.381 "num_base_bdevs": 4, 00:14:43.381 "num_base_bdevs_discovered": 3, 00:14:43.381 "num_base_bdevs_operational": 4, 00:14:43.381 "base_bdevs_list": [ 00:14:43.381 { 00:14:43.381 "name": null, 00:14:43.381 "uuid": "5f2ebc81-b4df-4da7-bd04-21d2250f7689", 00:14:43.381 "is_configured": false, 00:14:43.381 "data_offset": 0, 00:14:43.381 "data_size": 65536 00:14:43.381 }, 00:14:43.381 { 00:14:43.381 "name": "BaseBdev2", 00:14:43.381 "uuid": "72c9e288-ac30-45eb-92cd-b05cfdbcab7e", 00:14:43.381 "is_configured": true, 00:14:43.381 "data_offset": 0, 00:14:43.381 "data_size": 65536 00:14:43.381 }, 00:14:43.381 { 00:14:43.381 "name": "BaseBdev3", 00:14:43.381 "uuid": "1d5a4999-a938-4121-8234-de505dba7f01", 00:14:43.381 "is_configured": true, 00:14:43.381 "data_offset": 0, 00:14:43.381 "data_size": 65536 00:14:43.381 }, 00:14:43.381 { 00:14:43.381 "name": "BaseBdev4", 00:14:43.381 "uuid": "7b16bbfa-9f3f-4a2f-8fda-2d94375c6038", 00:14:43.381 "is_configured": true, 00:14:43.381 "data_offset": 0, 00:14:43.381 "data_size": 65536 00:14:43.381 } 00:14:43.381 ] 00:14:43.381 }' 00:14:43.381 15:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.381 15:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.949 15:41:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.949 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:43.949 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.949 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.949 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.949 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:43.949 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.949 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:43.949 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.949 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.949 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.949 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5f2ebc81-b4df-4da7-bd04-21d2250f7689 00:14:43.949 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.949 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.949 [2024-12-06 15:41:27.144569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:43.949 [2024-12-06 15:41:27.144620] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:43.949 [2024-12-06 15:41:27.144633] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:43.949 [2024-12-06 15:41:27.144956] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:43.949 [2024-12-06 15:41:27.145135] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:43.949 [2024-12-06 15:41:27.145152] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:43.949 [2024-12-06 15:41:27.145431] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:43.949 NewBaseBdev 00:14:43.949 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.949 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:43.949 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:43.949 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:43.949 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:43.949 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:43.949 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:43.949 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:43.949 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.949 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.949 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.949 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:43.949 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.949 15:41:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.949 [ 00:14:43.949 { 00:14:43.949 "name": "NewBaseBdev", 00:14:43.949 "aliases": [ 00:14:43.949 "5f2ebc81-b4df-4da7-bd04-21d2250f7689" 00:14:43.949 ], 00:14:43.949 "product_name": "Malloc disk", 00:14:43.949 "block_size": 512, 00:14:43.949 "num_blocks": 65536, 00:14:43.949 "uuid": "5f2ebc81-b4df-4da7-bd04-21d2250f7689", 00:14:43.949 "assigned_rate_limits": { 00:14:43.949 "rw_ios_per_sec": 0, 00:14:43.949 "rw_mbytes_per_sec": 0, 00:14:43.949 "r_mbytes_per_sec": 0, 00:14:43.949 "w_mbytes_per_sec": 0 00:14:43.949 }, 00:14:43.949 "claimed": true, 00:14:43.949 "claim_type": "exclusive_write", 00:14:43.949 "zoned": false, 00:14:43.949 "supported_io_types": { 00:14:43.949 "read": true, 00:14:43.949 "write": true, 00:14:43.949 "unmap": true, 00:14:43.949 "flush": true, 00:14:43.949 "reset": true, 00:14:43.949 "nvme_admin": false, 00:14:43.949 "nvme_io": false, 00:14:43.949 "nvme_io_md": false, 00:14:43.949 "write_zeroes": true, 00:14:43.950 "zcopy": true, 00:14:43.950 "get_zone_info": false, 00:14:43.950 "zone_management": false, 00:14:43.950 "zone_append": false, 00:14:43.950 "compare": false, 00:14:43.950 "compare_and_write": false, 00:14:43.950 "abort": true, 00:14:43.950 "seek_hole": false, 00:14:43.950 "seek_data": false, 00:14:43.950 "copy": true, 00:14:43.950 "nvme_iov_md": false 00:14:43.950 }, 00:14:43.950 "memory_domains": [ 00:14:43.950 { 00:14:43.950 "dma_device_id": "system", 00:14:43.950 "dma_device_type": 1 00:14:43.950 }, 00:14:43.950 { 00:14:43.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.950 "dma_device_type": 2 00:14:43.950 } 00:14:43.950 ], 00:14:43.950 "driver_specific": {} 00:14:43.950 } 00:14:43.950 ] 00:14:43.950 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.950 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:43.950 15:41:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:14:43.950 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:43.950 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.950 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:43.950 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:43.950 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:43.950 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.950 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.950 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.950 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.950 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.950 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.950 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.950 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.950 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.950 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.950 "name": "Existed_Raid", 00:14:43.950 "uuid": "87818297-b2f5-465d-8a42-211ccd8ce16c", 00:14:43.950 "strip_size_kb": 0, 00:14:43.950 "state": "online", 00:14:43.950 "raid_level": "raid1", 
00:14:43.950 "superblock": false, 00:14:43.950 "num_base_bdevs": 4, 00:14:43.950 "num_base_bdevs_discovered": 4, 00:14:43.950 "num_base_bdevs_operational": 4, 00:14:43.950 "base_bdevs_list": [ 00:14:43.950 { 00:14:43.950 "name": "NewBaseBdev", 00:14:43.950 "uuid": "5f2ebc81-b4df-4da7-bd04-21d2250f7689", 00:14:43.950 "is_configured": true, 00:14:43.950 "data_offset": 0, 00:14:43.950 "data_size": 65536 00:14:43.950 }, 00:14:43.950 { 00:14:43.950 "name": "BaseBdev2", 00:14:43.950 "uuid": "72c9e288-ac30-45eb-92cd-b05cfdbcab7e", 00:14:43.950 "is_configured": true, 00:14:43.950 "data_offset": 0, 00:14:43.950 "data_size": 65536 00:14:43.950 }, 00:14:43.950 { 00:14:43.950 "name": "BaseBdev3", 00:14:43.950 "uuid": "1d5a4999-a938-4121-8234-de505dba7f01", 00:14:43.950 "is_configured": true, 00:14:43.950 "data_offset": 0, 00:14:43.950 "data_size": 65536 00:14:43.950 }, 00:14:43.950 { 00:14:43.950 "name": "BaseBdev4", 00:14:43.950 "uuid": "7b16bbfa-9f3f-4a2f-8fda-2d94375c6038", 00:14:43.950 "is_configured": true, 00:14:43.950 "data_offset": 0, 00:14:43.950 "data_size": 65536 00:14:43.950 } 00:14:43.950 ] 00:14:43.950 }' 00:14:43.950 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.950 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.519 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:44.519 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:44.519 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:44.519 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:44.519 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:44.519 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:14:44.519 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:44.519 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:44.519 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.519 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.519 [2024-12-06 15:41:27.600250] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:44.519 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.519 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:44.519 "name": "Existed_Raid", 00:14:44.519 "aliases": [ 00:14:44.519 "87818297-b2f5-465d-8a42-211ccd8ce16c" 00:14:44.519 ], 00:14:44.519 "product_name": "Raid Volume", 00:14:44.519 "block_size": 512, 00:14:44.519 "num_blocks": 65536, 00:14:44.519 "uuid": "87818297-b2f5-465d-8a42-211ccd8ce16c", 00:14:44.519 "assigned_rate_limits": { 00:14:44.519 "rw_ios_per_sec": 0, 00:14:44.519 "rw_mbytes_per_sec": 0, 00:14:44.519 "r_mbytes_per_sec": 0, 00:14:44.519 "w_mbytes_per_sec": 0 00:14:44.519 }, 00:14:44.519 "claimed": false, 00:14:44.519 "zoned": false, 00:14:44.519 "supported_io_types": { 00:14:44.519 "read": true, 00:14:44.519 "write": true, 00:14:44.519 "unmap": false, 00:14:44.519 "flush": false, 00:14:44.519 "reset": true, 00:14:44.519 "nvme_admin": false, 00:14:44.519 "nvme_io": false, 00:14:44.519 "nvme_io_md": false, 00:14:44.519 "write_zeroes": true, 00:14:44.519 "zcopy": false, 00:14:44.519 "get_zone_info": false, 00:14:44.519 "zone_management": false, 00:14:44.519 "zone_append": false, 00:14:44.519 "compare": false, 00:14:44.519 "compare_and_write": false, 00:14:44.519 "abort": false, 00:14:44.519 "seek_hole": false, 00:14:44.519 "seek_data": false, 00:14:44.519 "copy": false, 00:14:44.519 
"nvme_iov_md": false 00:14:44.519 }, 00:14:44.519 "memory_domains": [ 00:14:44.519 { 00:14:44.519 "dma_device_id": "system", 00:14:44.519 "dma_device_type": 1 00:14:44.519 }, 00:14:44.519 { 00:14:44.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.519 "dma_device_type": 2 00:14:44.519 }, 00:14:44.519 { 00:14:44.519 "dma_device_id": "system", 00:14:44.519 "dma_device_type": 1 00:14:44.519 }, 00:14:44.519 { 00:14:44.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.519 "dma_device_type": 2 00:14:44.519 }, 00:14:44.519 { 00:14:44.519 "dma_device_id": "system", 00:14:44.519 "dma_device_type": 1 00:14:44.519 }, 00:14:44.519 { 00:14:44.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.519 "dma_device_type": 2 00:14:44.519 }, 00:14:44.519 { 00:14:44.519 "dma_device_id": "system", 00:14:44.519 "dma_device_type": 1 00:14:44.519 }, 00:14:44.519 { 00:14:44.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.519 "dma_device_type": 2 00:14:44.519 } 00:14:44.519 ], 00:14:44.519 "driver_specific": { 00:14:44.519 "raid": { 00:14:44.519 "uuid": "87818297-b2f5-465d-8a42-211ccd8ce16c", 00:14:44.519 "strip_size_kb": 0, 00:14:44.519 "state": "online", 00:14:44.519 "raid_level": "raid1", 00:14:44.519 "superblock": false, 00:14:44.519 "num_base_bdevs": 4, 00:14:44.519 "num_base_bdevs_discovered": 4, 00:14:44.519 "num_base_bdevs_operational": 4, 00:14:44.519 "base_bdevs_list": [ 00:14:44.519 { 00:14:44.519 "name": "NewBaseBdev", 00:14:44.519 "uuid": "5f2ebc81-b4df-4da7-bd04-21d2250f7689", 00:14:44.519 "is_configured": true, 00:14:44.519 "data_offset": 0, 00:14:44.519 "data_size": 65536 00:14:44.519 }, 00:14:44.519 { 00:14:44.519 "name": "BaseBdev2", 00:14:44.519 "uuid": "72c9e288-ac30-45eb-92cd-b05cfdbcab7e", 00:14:44.519 "is_configured": true, 00:14:44.519 "data_offset": 0, 00:14:44.519 "data_size": 65536 00:14:44.519 }, 00:14:44.519 { 00:14:44.519 "name": "BaseBdev3", 00:14:44.519 "uuid": "1d5a4999-a938-4121-8234-de505dba7f01", 00:14:44.519 "is_configured": true, 
00:14:44.519 "data_offset": 0, 00:14:44.519 "data_size": 65536 00:14:44.519 }, 00:14:44.519 { 00:14:44.519 "name": "BaseBdev4", 00:14:44.519 "uuid": "7b16bbfa-9f3f-4a2f-8fda-2d94375c6038", 00:14:44.519 "is_configured": true, 00:14:44.519 "data_offset": 0, 00:14:44.519 "data_size": 65536 00:14:44.519 } 00:14:44.519 ] 00:14:44.519 } 00:14:44.519 } 00:14:44.519 }' 00:14:44.519 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:44.519 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:44.519 BaseBdev2 00:14:44.519 BaseBdev3 00:14:44.519 BaseBdev4' 00:14:44.519 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:44.519 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:44.519 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:44.519 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:44.519 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:44.519 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.519 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.519 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.519 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:44.519 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:44.519 15:41:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:44.519 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:44.519 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:44.520 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.520 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.779 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.779 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:44.779 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:44.779 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:44.779 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:44.779 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:44.779 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.779 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.779 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.779 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:44.779 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:44.779 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:44.779 15:41:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:44.779 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:44.779 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.779 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.779 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.779 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:44.779 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:44.779 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:44.779 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.779 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.779 [2024-12-06 15:41:27.911612] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:44.779 [2024-12-06 15:41:27.911753] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:44.779 [2024-12-06 15:41:27.911857] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:44.779 [2024-12-06 15:41:27.912199] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:44.779 [2024-12-06 15:41:27.912216] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:44.779 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.779 15:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73215 
00:14:44.779 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73215 ']' 00:14:44.779 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73215 00:14:44.779 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:44.779 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:44.779 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73215 00:14:44.779 killing process with pid 73215 00:14:44.779 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:44.779 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:44.779 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73215' 00:14:44.779 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73215 00:14:44.779 [2024-12-06 15:41:27.961405] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:44.779 15:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73215 00:14:45.346 [2024-12-06 15:41:28.400332] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:46.724 15:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:46.724 00:14:46.724 real 0m11.823s 00:14:46.724 user 0m18.392s 00:14:46.724 sys 0m2.521s 00:14:46.724 ************************************ 00:14:46.724 END TEST raid_state_function_test 00:14:46.724 ************************************ 00:14:46.724 15:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:46.724 15:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.724 15:41:29 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:14:46.724 15:41:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:46.724 15:41:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:46.724 15:41:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:46.724 ************************************ 00:14:46.724 START TEST raid_state_function_test_sb 00:14:46.724 ************************************ 00:14:46.725 15:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:14:46.725 15:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:46.725 15:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:46.725 15:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:46.725 15:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:46.725 15:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:46.725 15:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:46.725 15:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:46.725 15:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:46.725 15:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:46.725 15:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:46.725 15:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:46.725 15:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:46.725 15:41:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:46.725 15:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:46.725 15:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:46.725 15:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:46.725 15:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:46.725 15:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:46.725 15:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:46.725 15:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:46.725 15:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:46.725 15:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:46.725 15:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:46.725 15:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:46.725 15:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:46.725 15:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:46.725 15:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:46.725 15:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:46.725 15:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73883 00:14:46.725 15:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:46.725 15:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73883' 00:14:46.725 Process raid pid: 73883 00:14:46.725 15:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73883 00:14:46.725 15:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73883 ']' 00:14:46.725 15:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.725 15:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:46.725 15:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.725 15:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:46.725 15:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.725 [2024-12-06 15:41:29.864777] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:14:46.725 [2024-12-06 15:41:29.864906] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.984 [2024-12-06 15:41:30.044360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.984 [2024-12-06 15:41:30.188646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.243 [2024-12-06 15:41:30.436686] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:47.243 [2024-12-06 15:41:30.436739] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:47.502 15:41:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:47.502 15:41:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:47.502 15:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:47.502 15:41:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.502 15:41:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.502 [2024-12-06 15:41:30.691944] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:47.502 [2024-12-06 15:41:30.692015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:47.502 [2024-12-06 15:41:30.692028] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:47.502 [2024-12-06 15:41:30.692043] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:47.502 [2024-12-06 15:41:30.692051] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:14:47.502 [2024-12-06 15:41:30.692063] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:47.502 [2024-12-06 15:41:30.692071] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:47.502 [2024-12-06 15:41:30.692084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:47.502 15:41:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.502 15:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:47.502 15:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.502 15:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:47.502 15:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:47.502 15:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:47.502 15:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:47.502 15:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.503 15:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.503 15:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.503 15:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.503 15:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.503 15:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.503 15:41:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.503 15:41:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.503 15:41:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.503 15:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.503 "name": "Existed_Raid", 00:14:47.503 "uuid": "9e1081eb-7af3-4bdf-a9d7-9fe7864a497c", 00:14:47.503 "strip_size_kb": 0, 00:14:47.503 "state": "configuring", 00:14:47.503 "raid_level": "raid1", 00:14:47.503 "superblock": true, 00:14:47.503 "num_base_bdevs": 4, 00:14:47.503 "num_base_bdevs_discovered": 0, 00:14:47.503 "num_base_bdevs_operational": 4, 00:14:47.503 "base_bdevs_list": [ 00:14:47.503 { 00:14:47.503 "name": "BaseBdev1", 00:14:47.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.503 "is_configured": false, 00:14:47.503 "data_offset": 0, 00:14:47.503 "data_size": 0 00:14:47.503 }, 00:14:47.503 { 00:14:47.503 "name": "BaseBdev2", 00:14:47.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.503 "is_configured": false, 00:14:47.503 "data_offset": 0, 00:14:47.503 "data_size": 0 00:14:47.503 }, 00:14:47.503 { 00:14:47.503 "name": "BaseBdev3", 00:14:47.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.503 "is_configured": false, 00:14:47.503 "data_offset": 0, 00:14:47.503 "data_size": 0 00:14:47.503 }, 00:14:47.503 { 00:14:47.503 "name": "BaseBdev4", 00:14:47.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.503 "is_configured": false, 00:14:47.503 "data_offset": 0, 00:14:47.503 "data_size": 0 00:14:47.503 } 00:14:47.503 ] 00:14:47.503 }' 00:14:47.503 15:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.503 15:41:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.071 15:41:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:48.071 15:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.071 15:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.071 [2024-12-06 15:41:31.115497] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:48.071 [2024-12-06 15:41:31.115558] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:48.071 15:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.071 15:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:48.071 15:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.071 15:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.071 [2024-12-06 15:41:31.127449] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:48.071 [2024-12-06 15:41:31.127512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:48.071 [2024-12-06 15:41:31.127525] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:48.071 [2024-12-06 15:41:31.127539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:48.071 [2024-12-06 15:41:31.127547] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:48.071 [2024-12-06 15:41:31.127561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:48.071 [2024-12-06 15:41:31.127568] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:14:48.071 [2024-12-06 15:41:31.127582] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:48.071 15:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.071 15:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:48.071 15:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.071 15:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.071 [2024-12-06 15:41:31.182962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:48.071 BaseBdev1 00:14:48.071 15:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.071 15:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:48.071 15:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:48.071 15:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:48.071 15:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:48.071 15:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:48.071 15:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:48.071 15:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:48.071 15:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.071 15:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.071 15:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:48.071 15:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:48.071 15:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.071 15:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.071 [ 00:14:48.071 { 00:14:48.071 "name": "BaseBdev1", 00:14:48.071 "aliases": [ 00:14:48.071 "3b55355a-7e09-4c4b-a1f7-35f1833c58d5" 00:14:48.071 ], 00:14:48.071 "product_name": "Malloc disk", 00:14:48.071 "block_size": 512, 00:14:48.071 "num_blocks": 65536, 00:14:48.071 "uuid": "3b55355a-7e09-4c4b-a1f7-35f1833c58d5", 00:14:48.071 "assigned_rate_limits": { 00:14:48.071 "rw_ios_per_sec": 0, 00:14:48.071 "rw_mbytes_per_sec": 0, 00:14:48.071 "r_mbytes_per_sec": 0, 00:14:48.071 "w_mbytes_per_sec": 0 00:14:48.071 }, 00:14:48.071 "claimed": true, 00:14:48.071 "claim_type": "exclusive_write", 00:14:48.071 "zoned": false, 00:14:48.072 "supported_io_types": { 00:14:48.072 "read": true, 00:14:48.072 "write": true, 00:14:48.072 "unmap": true, 00:14:48.072 "flush": true, 00:14:48.072 "reset": true, 00:14:48.072 "nvme_admin": false, 00:14:48.072 "nvme_io": false, 00:14:48.072 "nvme_io_md": false, 00:14:48.072 "write_zeroes": true, 00:14:48.072 "zcopy": true, 00:14:48.072 "get_zone_info": false, 00:14:48.072 "zone_management": false, 00:14:48.072 "zone_append": false, 00:14:48.072 "compare": false, 00:14:48.072 "compare_and_write": false, 00:14:48.072 "abort": true, 00:14:48.072 "seek_hole": false, 00:14:48.072 "seek_data": false, 00:14:48.072 "copy": true, 00:14:48.072 "nvme_iov_md": false 00:14:48.072 }, 00:14:48.072 "memory_domains": [ 00:14:48.072 { 00:14:48.072 "dma_device_id": "system", 00:14:48.072 "dma_device_type": 1 00:14:48.072 }, 00:14:48.072 { 00:14:48.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.072 "dma_device_type": 2 00:14:48.072 } 00:14:48.072 ], 00:14:48.072 "driver_specific": {} 
00:14:48.072 } 00:14:48.072 ] 00:14:48.072 15:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.072 15:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:48.072 15:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:48.072 15:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.072 15:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:48.072 15:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:48.072 15:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:48.072 15:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:48.072 15:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.072 15:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.072 15:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.072 15:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.072 15:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.072 15:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.072 15:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.072 15:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.072 15:41:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.072 15:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.072 "name": "Existed_Raid", 00:14:48.072 "uuid": "015e3df0-113a-4516-82d3-710900a68dd9", 00:14:48.072 "strip_size_kb": 0, 00:14:48.072 "state": "configuring", 00:14:48.072 "raid_level": "raid1", 00:14:48.072 "superblock": true, 00:14:48.072 "num_base_bdevs": 4, 00:14:48.072 "num_base_bdevs_discovered": 1, 00:14:48.072 "num_base_bdevs_operational": 4, 00:14:48.072 "base_bdevs_list": [ 00:14:48.072 { 00:14:48.072 "name": "BaseBdev1", 00:14:48.072 "uuid": "3b55355a-7e09-4c4b-a1f7-35f1833c58d5", 00:14:48.072 "is_configured": true, 00:14:48.072 "data_offset": 2048, 00:14:48.072 "data_size": 63488 00:14:48.072 }, 00:14:48.072 { 00:14:48.072 "name": "BaseBdev2", 00:14:48.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.072 "is_configured": false, 00:14:48.072 "data_offset": 0, 00:14:48.072 "data_size": 0 00:14:48.072 }, 00:14:48.072 { 00:14:48.072 "name": "BaseBdev3", 00:14:48.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.072 "is_configured": false, 00:14:48.072 "data_offset": 0, 00:14:48.072 "data_size": 0 00:14:48.072 }, 00:14:48.072 { 00:14:48.072 "name": "BaseBdev4", 00:14:48.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.072 "is_configured": false, 00:14:48.072 "data_offset": 0, 00:14:48.072 "data_size": 0 00:14:48.072 } 00:14:48.072 ] 00:14:48.072 }' 00:14:48.072 15:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.072 15:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.331 15:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:48.331 15:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.331 15:41:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:48.331 [2024-12-06 15:41:31.586629] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:48.331 [2024-12-06 15:41:31.586680] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:48.331 15:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.331 15:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:48.331 15:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.331 15:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.331 [2024-12-06 15:41:31.598708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:48.331 [2024-12-06 15:41:31.601071] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:48.331 [2024-12-06 15:41:31.601123] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:48.331 [2024-12-06 15:41:31.601135] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:48.331 [2024-12-06 15:41:31.601150] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:48.331 [2024-12-06 15:41:31.601158] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:48.331 [2024-12-06 15:41:31.601170] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:48.331 15:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.331 15:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:48.331 15:41:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:48.331 15:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:48.331 15:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.331 15:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:48.331 15:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:48.331 15:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:48.331 15:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:48.331 15:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.331 15:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.331 15:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.331 15:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.331 15:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.331 15:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.331 15:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.331 15:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.590 15:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.590 15:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.590 "name": 
"Existed_Raid", 00:14:48.590 "uuid": "b9ee67e9-560e-4192-b447-792697e3e489", 00:14:48.590 "strip_size_kb": 0, 00:14:48.590 "state": "configuring", 00:14:48.590 "raid_level": "raid1", 00:14:48.590 "superblock": true, 00:14:48.590 "num_base_bdevs": 4, 00:14:48.590 "num_base_bdevs_discovered": 1, 00:14:48.590 "num_base_bdevs_operational": 4, 00:14:48.590 "base_bdevs_list": [ 00:14:48.590 { 00:14:48.590 "name": "BaseBdev1", 00:14:48.590 "uuid": "3b55355a-7e09-4c4b-a1f7-35f1833c58d5", 00:14:48.590 "is_configured": true, 00:14:48.590 "data_offset": 2048, 00:14:48.590 "data_size": 63488 00:14:48.590 }, 00:14:48.590 { 00:14:48.590 "name": "BaseBdev2", 00:14:48.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.590 "is_configured": false, 00:14:48.590 "data_offset": 0, 00:14:48.590 "data_size": 0 00:14:48.590 }, 00:14:48.590 { 00:14:48.590 "name": "BaseBdev3", 00:14:48.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.590 "is_configured": false, 00:14:48.590 "data_offset": 0, 00:14:48.590 "data_size": 0 00:14:48.590 }, 00:14:48.590 { 00:14:48.590 "name": "BaseBdev4", 00:14:48.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.590 "is_configured": false, 00:14:48.590 "data_offset": 0, 00:14:48.590 "data_size": 0 00:14:48.590 } 00:14:48.590 ] 00:14:48.590 }' 00:14:48.590 15:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.590 15:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.850 15:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:48.850 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.850 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.850 [2024-12-06 15:41:32.071915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:48.850 
BaseBdev2 00:14:48.850 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.850 15:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:48.850 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:48.850 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:48.850 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:48.850 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:48.850 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:48.850 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:48.850 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.850 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.850 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.850 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:48.850 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.850 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.850 [ 00:14:48.850 { 00:14:48.850 "name": "BaseBdev2", 00:14:48.850 "aliases": [ 00:14:48.850 "33cc1b18-52eb-4740-bc63-2cdd1307f3af" 00:14:48.850 ], 00:14:48.850 "product_name": "Malloc disk", 00:14:48.851 "block_size": 512, 00:14:48.851 "num_blocks": 65536, 00:14:48.851 "uuid": "33cc1b18-52eb-4740-bc63-2cdd1307f3af", 00:14:48.851 "assigned_rate_limits": { 
00:14:48.851 "rw_ios_per_sec": 0, 00:14:48.851 "rw_mbytes_per_sec": 0, 00:14:48.851 "r_mbytes_per_sec": 0, 00:14:48.851 "w_mbytes_per_sec": 0 00:14:48.851 }, 00:14:48.851 "claimed": true, 00:14:48.851 "claim_type": "exclusive_write", 00:14:48.851 "zoned": false, 00:14:48.851 "supported_io_types": { 00:14:48.851 "read": true, 00:14:48.851 "write": true, 00:14:48.851 "unmap": true, 00:14:48.851 "flush": true, 00:14:48.851 "reset": true, 00:14:48.851 "nvme_admin": false, 00:14:48.851 "nvme_io": false, 00:14:48.851 "nvme_io_md": false, 00:14:48.851 "write_zeroes": true, 00:14:48.851 "zcopy": true, 00:14:48.851 "get_zone_info": false, 00:14:48.851 "zone_management": false, 00:14:48.851 "zone_append": false, 00:14:48.851 "compare": false, 00:14:48.851 "compare_and_write": false, 00:14:48.851 "abort": true, 00:14:48.851 "seek_hole": false, 00:14:48.851 "seek_data": false, 00:14:48.851 "copy": true, 00:14:48.851 "nvme_iov_md": false 00:14:48.851 }, 00:14:48.851 "memory_domains": [ 00:14:48.851 { 00:14:48.851 "dma_device_id": "system", 00:14:48.851 "dma_device_type": 1 00:14:48.851 }, 00:14:48.851 { 00:14:48.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.851 "dma_device_type": 2 00:14:48.851 } 00:14:48.851 ], 00:14:48.851 "driver_specific": {} 00:14:48.851 } 00:14:48.851 ] 00:14:48.851 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.851 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:48.851 15:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:48.851 15:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:48.851 15:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:48.851 15:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:14:48.851 15:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:48.851 15:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:48.851 15:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:48.851 15:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:48.851 15:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.851 15:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.851 15:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.851 15:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.851 15:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.851 15:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.851 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.851 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.114 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.114 15:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.114 "name": "Existed_Raid", 00:14:49.114 "uuid": "b9ee67e9-560e-4192-b447-792697e3e489", 00:14:49.114 "strip_size_kb": 0, 00:14:49.114 "state": "configuring", 00:14:49.114 "raid_level": "raid1", 00:14:49.114 "superblock": true, 00:14:49.114 "num_base_bdevs": 4, 00:14:49.114 "num_base_bdevs_discovered": 2, 00:14:49.114 "num_base_bdevs_operational": 4, 00:14:49.114 
"base_bdevs_list": [ 00:14:49.114 { 00:14:49.114 "name": "BaseBdev1", 00:14:49.114 "uuid": "3b55355a-7e09-4c4b-a1f7-35f1833c58d5", 00:14:49.114 "is_configured": true, 00:14:49.114 "data_offset": 2048, 00:14:49.114 "data_size": 63488 00:14:49.114 }, 00:14:49.114 { 00:14:49.114 "name": "BaseBdev2", 00:14:49.114 "uuid": "33cc1b18-52eb-4740-bc63-2cdd1307f3af", 00:14:49.114 "is_configured": true, 00:14:49.114 "data_offset": 2048, 00:14:49.114 "data_size": 63488 00:14:49.114 }, 00:14:49.114 { 00:14:49.114 "name": "BaseBdev3", 00:14:49.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.114 "is_configured": false, 00:14:49.114 "data_offset": 0, 00:14:49.114 "data_size": 0 00:14:49.114 }, 00:14:49.114 { 00:14:49.114 "name": "BaseBdev4", 00:14:49.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.114 "is_configured": false, 00:14:49.114 "data_offset": 0, 00:14:49.114 "data_size": 0 00:14:49.114 } 00:14:49.114 ] 00:14:49.114 }' 00:14:49.114 15:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.114 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.396 15:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:49.396 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.396 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.396 [2024-12-06 15:41:32.558275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:49.396 BaseBdev3 00:14:49.396 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.396 15:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:49.396 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:14:49.396 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:49.396 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:49.396 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:49.396 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:49.396 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:49.396 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.396 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.396 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.396 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:49.396 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.396 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.396 [ 00:14:49.396 { 00:14:49.396 "name": "BaseBdev3", 00:14:49.396 "aliases": [ 00:14:49.396 "de62d9c2-f858-4ca2-80b9-a387f7d91bae" 00:14:49.396 ], 00:14:49.396 "product_name": "Malloc disk", 00:14:49.396 "block_size": 512, 00:14:49.396 "num_blocks": 65536, 00:14:49.396 "uuid": "de62d9c2-f858-4ca2-80b9-a387f7d91bae", 00:14:49.396 "assigned_rate_limits": { 00:14:49.396 "rw_ios_per_sec": 0, 00:14:49.396 "rw_mbytes_per_sec": 0, 00:14:49.396 "r_mbytes_per_sec": 0, 00:14:49.396 "w_mbytes_per_sec": 0 00:14:49.396 }, 00:14:49.396 "claimed": true, 00:14:49.396 "claim_type": "exclusive_write", 00:14:49.396 "zoned": false, 00:14:49.396 "supported_io_types": { 00:14:49.396 "read": true, 00:14:49.396 
"write": true, 00:14:49.396 "unmap": true, 00:14:49.396 "flush": true, 00:14:49.396 "reset": true, 00:14:49.396 "nvme_admin": false, 00:14:49.396 "nvme_io": false, 00:14:49.396 "nvme_io_md": false, 00:14:49.396 "write_zeroes": true, 00:14:49.396 "zcopy": true, 00:14:49.396 "get_zone_info": false, 00:14:49.396 "zone_management": false, 00:14:49.396 "zone_append": false, 00:14:49.396 "compare": false, 00:14:49.396 "compare_and_write": false, 00:14:49.396 "abort": true, 00:14:49.396 "seek_hole": false, 00:14:49.396 "seek_data": false, 00:14:49.396 "copy": true, 00:14:49.396 "nvme_iov_md": false 00:14:49.396 }, 00:14:49.396 "memory_domains": [ 00:14:49.396 { 00:14:49.396 "dma_device_id": "system", 00:14:49.396 "dma_device_type": 1 00:14:49.396 }, 00:14:49.396 { 00:14:49.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.396 "dma_device_type": 2 00:14:49.397 } 00:14:49.397 ], 00:14:49.397 "driver_specific": {} 00:14:49.397 } 00:14:49.397 ] 00:14:49.397 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.397 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:49.397 15:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:49.397 15:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:49.397 15:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:49.397 15:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.397 15:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:49.397 15:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:49.397 15:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:14:49.397 15:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:49.397 15:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.397 15:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.397 15:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.397 15:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.397 15:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.397 15:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.397 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.397 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.397 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.397 15:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.397 "name": "Existed_Raid", 00:14:49.397 "uuid": "b9ee67e9-560e-4192-b447-792697e3e489", 00:14:49.397 "strip_size_kb": 0, 00:14:49.397 "state": "configuring", 00:14:49.397 "raid_level": "raid1", 00:14:49.397 "superblock": true, 00:14:49.397 "num_base_bdevs": 4, 00:14:49.397 "num_base_bdevs_discovered": 3, 00:14:49.397 "num_base_bdevs_operational": 4, 00:14:49.397 "base_bdevs_list": [ 00:14:49.397 { 00:14:49.397 "name": "BaseBdev1", 00:14:49.397 "uuid": "3b55355a-7e09-4c4b-a1f7-35f1833c58d5", 00:14:49.397 "is_configured": true, 00:14:49.397 "data_offset": 2048, 00:14:49.397 "data_size": 63488 00:14:49.397 }, 00:14:49.397 { 00:14:49.397 "name": "BaseBdev2", 00:14:49.397 "uuid": 
"33cc1b18-52eb-4740-bc63-2cdd1307f3af", 00:14:49.397 "is_configured": true, 00:14:49.397 "data_offset": 2048, 00:14:49.397 "data_size": 63488 00:14:49.397 }, 00:14:49.397 { 00:14:49.397 "name": "BaseBdev3", 00:14:49.397 "uuid": "de62d9c2-f858-4ca2-80b9-a387f7d91bae", 00:14:49.397 "is_configured": true, 00:14:49.397 "data_offset": 2048, 00:14:49.397 "data_size": 63488 00:14:49.397 }, 00:14:49.397 { 00:14:49.397 "name": "BaseBdev4", 00:14:49.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.397 "is_configured": false, 00:14:49.397 "data_offset": 0, 00:14:49.397 "data_size": 0 00:14:49.397 } 00:14:49.397 ] 00:14:49.397 }' 00:14:49.397 15:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.397 15:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.965 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:49.965 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.965 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.965 [2024-12-06 15:41:33.087629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:49.965 [2024-12-06 15:41:33.088181] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:49.965 [2024-12-06 15:41:33.088206] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:49.965 [2024-12-06 15:41:33.088569] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:49.965 BaseBdev4 00:14:49.965 [2024-12-06 15:41:33.088764] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:49.965 [2024-12-06 15:41:33.088780] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:14:49.965 [2024-12-06 15:41:33.088944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.965 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.965 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:49.965 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:49.965 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:49.965 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:49.965 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:49.966 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:49.966 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:49.966 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.966 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.966 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.966 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:49.966 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.966 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.966 [ 00:14:49.966 { 00:14:49.966 "name": "BaseBdev4", 00:14:49.966 "aliases": [ 00:14:49.966 "3a9b0ebd-114f-4b1c-b6ae-1be18211dd82" 00:14:49.966 ], 00:14:49.966 "product_name": "Malloc disk", 00:14:49.966 "block_size": 512, 00:14:49.966 
"num_blocks": 65536, 00:14:49.966 "uuid": "3a9b0ebd-114f-4b1c-b6ae-1be18211dd82", 00:14:49.966 "assigned_rate_limits": { 00:14:49.966 "rw_ios_per_sec": 0, 00:14:49.966 "rw_mbytes_per_sec": 0, 00:14:49.966 "r_mbytes_per_sec": 0, 00:14:49.966 "w_mbytes_per_sec": 0 00:14:49.966 }, 00:14:49.966 "claimed": true, 00:14:49.966 "claim_type": "exclusive_write", 00:14:49.966 "zoned": false, 00:14:49.966 "supported_io_types": { 00:14:49.966 "read": true, 00:14:49.966 "write": true, 00:14:49.966 "unmap": true, 00:14:49.966 "flush": true, 00:14:49.966 "reset": true, 00:14:49.966 "nvme_admin": false, 00:14:49.966 "nvme_io": false, 00:14:49.966 "nvme_io_md": false, 00:14:49.966 "write_zeroes": true, 00:14:49.966 "zcopy": true, 00:14:49.966 "get_zone_info": false, 00:14:49.966 "zone_management": false, 00:14:49.966 "zone_append": false, 00:14:49.966 "compare": false, 00:14:49.966 "compare_and_write": false, 00:14:49.966 "abort": true, 00:14:49.966 "seek_hole": false, 00:14:49.966 "seek_data": false, 00:14:49.966 "copy": true, 00:14:49.966 "nvme_iov_md": false 00:14:49.966 }, 00:14:49.966 "memory_domains": [ 00:14:49.966 { 00:14:49.966 "dma_device_id": "system", 00:14:49.966 "dma_device_type": 1 00:14:49.966 }, 00:14:49.966 { 00:14:49.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.966 "dma_device_type": 2 00:14:49.966 } 00:14:49.966 ], 00:14:49.966 "driver_specific": {} 00:14:49.966 } 00:14:49.966 ] 00:14:49.966 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.966 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:49.966 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:49.966 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:49.966 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:14:49.966 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.966 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.966 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:49.966 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.966 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:49.966 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.966 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.966 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.966 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.966 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.966 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.966 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.966 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.966 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.966 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.966 "name": "Existed_Raid", 00:14:49.966 "uuid": "b9ee67e9-560e-4192-b447-792697e3e489", 00:14:49.966 "strip_size_kb": 0, 00:14:49.966 "state": "online", 00:14:49.966 "raid_level": "raid1", 00:14:49.966 "superblock": true, 00:14:49.966 "num_base_bdevs": 4, 
00:14:49.966 "num_base_bdevs_discovered": 4, 00:14:49.966 "num_base_bdevs_operational": 4, 00:14:49.966 "base_bdevs_list": [ 00:14:49.966 { 00:14:49.966 "name": "BaseBdev1", 00:14:49.966 "uuid": "3b55355a-7e09-4c4b-a1f7-35f1833c58d5", 00:14:49.966 "is_configured": true, 00:14:49.966 "data_offset": 2048, 00:14:49.966 "data_size": 63488 00:14:49.966 }, 00:14:49.966 { 00:14:49.966 "name": "BaseBdev2", 00:14:49.966 "uuid": "33cc1b18-52eb-4740-bc63-2cdd1307f3af", 00:14:49.966 "is_configured": true, 00:14:49.966 "data_offset": 2048, 00:14:49.966 "data_size": 63488 00:14:49.966 }, 00:14:49.966 { 00:14:49.966 "name": "BaseBdev3", 00:14:49.966 "uuid": "de62d9c2-f858-4ca2-80b9-a387f7d91bae", 00:14:49.966 "is_configured": true, 00:14:49.966 "data_offset": 2048, 00:14:49.966 "data_size": 63488 00:14:49.966 }, 00:14:49.966 { 00:14:49.966 "name": "BaseBdev4", 00:14:49.966 "uuid": "3a9b0ebd-114f-4b1c-b6ae-1be18211dd82", 00:14:49.966 "is_configured": true, 00:14:49.966 "data_offset": 2048, 00:14:49.966 "data_size": 63488 00:14:49.966 } 00:14:49.966 ] 00:14:49.966 }' 00:14:49.966 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.966 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.535 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:50.535 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:50.535 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:50.535 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:50.535 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:50.535 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:50.535 
15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:50.535 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:50.535 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.535 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.535 [2024-12-06 15:41:33.567365] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:50.535 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.535 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:50.535 "name": "Existed_Raid", 00:14:50.535 "aliases": [ 00:14:50.535 "b9ee67e9-560e-4192-b447-792697e3e489" 00:14:50.535 ], 00:14:50.535 "product_name": "Raid Volume", 00:14:50.535 "block_size": 512, 00:14:50.535 "num_blocks": 63488, 00:14:50.535 "uuid": "b9ee67e9-560e-4192-b447-792697e3e489", 00:14:50.535 "assigned_rate_limits": { 00:14:50.535 "rw_ios_per_sec": 0, 00:14:50.535 "rw_mbytes_per_sec": 0, 00:14:50.535 "r_mbytes_per_sec": 0, 00:14:50.535 "w_mbytes_per_sec": 0 00:14:50.535 }, 00:14:50.535 "claimed": false, 00:14:50.535 "zoned": false, 00:14:50.535 "supported_io_types": { 00:14:50.535 "read": true, 00:14:50.535 "write": true, 00:14:50.535 "unmap": false, 00:14:50.535 "flush": false, 00:14:50.535 "reset": true, 00:14:50.535 "nvme_admin": false, 00:14:50.535 "nvme_io": false, 00:14:50.535 "nvme_io_md": false, 00:14:50.535 "write_zeroes": true, 00:14:50.535 "zcopy": false, 00:14:50.535 "get_zone_info": false, 00:14:50.535 "zone_management": false, 00:14:50.535 "zone_append": false, 00:14:50.535 "compare": false, 00:14:50.535 "compare_and_write": false, 00:14:50.535 "abort": false, 00:14:50.535 "seek_hole": false, 00:14:50.535 "seek_data": false, 00:14:50.535 "copy": false, 00:14:50.535 
"nvme_iov_md": false 00:14:50.535 }, 00:14:50.535 "memory_domains": [ 00:14:50.535 { 00:14:50.535 "dma_device_id": "system", 00:14:50.535 "dma_device_type": 1 00:14:50.535 }, 00:14:50.535 { 00:14:50.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.535 "dma_device_type": 2 00:14:50.535 }, 00:14:50.535 { 00:14:50.535 "dma_device_id": "system", 00:14:50.535 "dma_device_type": 1 00:14:50.535 }, 00:14:50.535 { 00:14:50.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.535 "dma_device_type": 2 00:14:50.535 }, 00:14:50.535 { 00:14:50.535 "dma_device_id": "system", 00:14:50.535 "dma_device_type": 1 00:14:50.536 }, 00:14:50.536 { 00:14:50.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.536 "dma_device_type": 2 00:14:50.536 }, 00:14:50.536 { 00:14:50.536 "dma_device_id": "system", 00:14:50.536 "dma_device_type": 1 00:14:50.536 }, 00:14:50.536 { 00:14:50.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.536 "dma_device_type": 2 00:14:50.536 } 00:14:50.536 ], 00:14:50.536 "driver_specific": { 00:14:50.536 "raid": { 00:14:50.536 "uuid": "b9ee67e9-560e-4192-b447-792697e3e489", 00:14:50.536 "strip_size_kb": 0, 00:14:50.536 "state": "online", 00:14:50.536 "raid_level": "raid1", 00:14:50.536 "superblock": true, 00:14:50.536 "num_base_bdevs": 4, 00:14:50.536 "num_base_bdevs_discovered": 4, 00:14:50.536 "num_base_bdevs_operational": 4, 00:14:50.536 "base_bdevs_list": [ 00:14:50.536 { 00:14:50.536 "name": "BaseBdev1", 00:14:50.536 "uuid": "3b55355a-7e09-4c4b-a1f7-35f1833c58d5", 00:14:50.536 "is_configured": true, 00:14:50.536 "data_offset": 2048, 00:14:50.536 "data_size": 63488 00:14:50.536 }, 00:14:50.536 { 00:14:50.536 "name": "BaseBdev2", 00:14:50.536 "uuid": "33cc1b18-52eb-4740-bc63-2cdd1307f3af", 00:14:50.536 "is_configured": true, 00:14:50.536 "data_offset": 2048, 00:14:50.536 "data_size": 63488 00:14:50.536 }, 00:14:50.536 { 00:14:50.536 "name": "BaseBdev3", 00:14:50.536 "uuid": "de62d9c2-f858-4ca2-80b9-a387f7d91bae", 00:14:50.536 "is_configured": true, 
00:14:50.536 "data_offset": 2048, 00:14:50.536 "data_size": 63488 00:14:50.536 }, 00:14:50.536 { 00:14:50.536 "name": "BaseBdev4", 00:14:50.536 "uuid": "3a9b0ebd-114f-4b1c-b6ae-1be18211dd82", 00:14:50.536 "is_configured": true, 00:14:50.536 "data_offset": 2048, 00:14:50.536 "data_size": 63488 00:14:50.536 } 00:14:50.536 ] 00:14:50.536 } 00:14:50.536 } 00:14:50.536 }' 00:14:50.536 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:50.536 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:50.536 BaseBdev2 00:14:50.536 BaseBdev3 00:14:50.536 BaseBdev4' 00:14:50.536 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.536 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:50.536 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.536 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:50.536 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.536 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.536 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.536 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.536 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:50.536 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.536 15:41:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.536 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:50.536 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.536 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.536 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.536 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.536 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:50.536 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.536 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.536 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:50.536 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.536 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.536 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.536 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.536 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:50.536 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.536 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:14:50.795 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:50.795 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.795 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.795 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.795 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.795 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:50.795 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.795 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:50.795 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.795 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.795 [2024-12-06 15:41:33.874637] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:50.795 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.795 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:50.795 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:50.795 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:50.795 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:50.795 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:50.795 15:41:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:50.795 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.795 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:50.796 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:50.796 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:50.796 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:50.796 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.796 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.796 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.796 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.796 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.796 15:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.796 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.796 15:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.796 15:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.796 15:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.796 "name": "Existed_Raid", 00:14:50.796 "uuid": "b9ee67e9-560e-4192-b447-792697e3e489", 00:14:50.796 "strip_size_kb": 0, 00:14:50.796 
"state": "online", 00:14:50.796 "raid_level": "raid1", 00:14:50.796 "superblock": true, 00:14:50.796 "num_base_bdevs": 4, 00:14:50.796 "num_base_bdevs_discovered": 3, 00:14:50.796 "num_base_bdevs_operational": 3, 00:14:50.796 "base_bdevs_list": [ 00:14:50.796 { 00:14:50.796 "name": null, 00:14:50.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.796 "is_configured": false, 00:14:50.796 "data_offset": 0, 00:14:50.796 "data_size": 63488 00:14:50.796 }, 00:14:50.796 { 00:14:50.796 "name": "BaseBdev2", 00:14:50.796 "uuid": "33cc1b18-52eb-4740-bc63-2cdd1307f3af", 00:14:50.796 "is_configured": true, 00:14:50.796 "data_offset": 2048, 00:14:50.796 "data_size": 63488 00:14:50.796 }, 00:14:50.796 { 00:14:50.796 "name": "BaseBdev3", 00:14:50.796 "uuid": "de62d9c2-f858-4ca2-80b9-a387f7d91bae", 00:14:50.796 "is_configured": true, 00:14:50.796 "data_offset": 2048, 00:14:50.796 "data_size": 63488 00:14:50.796 }, 00:14:50.796 { 00:14:50.796 "name": "BaseBdev4", 00:14:50.796 "uuid": "3a9b0ebd-114f-4b1c-b6ae-1be18211dd82", 00:14:50.796 "is_configured": true, 00:14:50.796 "data_offset": 2048, 00:14:50.796 "data_size": 63488 00:14:50.796 } 00:14:50.796 ] 00:14:50.796 }' 00:14:50.796 15:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.796 15:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.370 15:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:51.370 15:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:51.370 15:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.370 15:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.370 15:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.370 15:41:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:51.370 15:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.370 15:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:51.370 15:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:51.371 15:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:51.371 15:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.371 15:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.371 [2024-12-06 15:41:34.484877] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:51.371 15:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.371 15:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:51.371 15:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:51.371 15:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.371 15:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:51.371 15:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.371 15:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.371 15:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.371 15:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:51.371 15:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:14:51.371 15:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:51.371 15:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.371 15:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.371 [2024-12-06 15:41:34.651221] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:51.630 15:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.630 15:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:51.630 15:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:51.630 15:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.630 15:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.630 15:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.630 15:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:51.630 15:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.630 15:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:51.630 15:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:51.630 15:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:51.630 15:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.630 15:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.630 [2024-12-06 15:41:34.814049] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:51.630 [2024-12-06 15:41:34.814208] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:51.630 [2024-12-06 15:41:34.920419] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:51.630 [2024-12-06 15:41:34.920729] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:51.630 [2024-12-06 15:41:34.920761] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:51.630 15:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.630 15:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:51.630 15:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:51.889 15:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.889 15:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.889 15:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:51.889 15:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.889 15:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.889 15:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:51.889 15:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:51.889 15:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:51.889 15:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:51.889 15:41:34 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:51.889 15:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:51.889 15:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.889 15:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.889 BaseBdev2 00:14:51.889 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.889 15:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:51.889 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:51.889 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:51.889 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:51.889 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:51.889 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:51.889 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:51.889 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.889 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.889 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.889 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:51.889 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.890 15:41:35 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:14:51.890 [ 00:14:51.890 { 00:14:51.890 "name": "BaseBdev2", 00:14:51.890 "aliases": [ 00:14:51.890 "a9aef3ed-abd3-4996-a82f-74b8bfc5b8eb" 00:14:51.890 ], 00:14:51.890 "product_name": "Malloc disk", 00:14:51.890 "block_size": 512, 00:14:51.890 "num_blocks": 65536, 00:14:51.890 "uuid": "a9aef3ed-abd3-4996-a82f-74b8bfc5b8eb", 00:14:51.890 "assigned_rate_limits": { 00:14:51.890 "rw_ios_per_sec": 0, 00:14:51.890 "rw_mbytes_per_sec": 0, 00:14:51.890 "r_mbytes_per_sec": 0, 00:14:51.890 "w_mbytes_per_sec": 0 00:14:51.890 }, 00:14:51.890 "claimed": false, 00:14:51.890 "zoned": false, 00:14:51.890 "supported_io_types": { 00:14:51.890 "read": true, 00:14:51.890 "write": true, 00:14:51.890 "unmap": true, 00:14:51.890 "flush": true, 00:14:51.890 "reset": true, 00:14:51.890 "nvme_admin": false, 00:14:51.890 "nvme_io": false, 00:14:51.890 "nvme_io_md": false, 00:14:51.890 "write_zeroes": true, 00:14:51.890 "zcopy": true, 00:14:51.890 "get_zone_info": false, 00:14:51.890 "zone_management": false, 00:14:51.890 "zone_append": false, 00:14:51.890 "compare": false, 00:14:51.890 "compare_and_write": false, 00:14:51.890 "abort": true, 00:14:51.890 "seek_hole": false, 00:14:51.890 "seek_data": false, 00:14:51.890 "copy": true, 00:14:51.890 "nvme_iov_md": false 00:14:51.890 }, 00:14:51.890 "memory_domains": [ 00:14:51.890 { 00:14:51.890 "dma_device_id": "system", 00:14:51.890 "dma_device_type": 1 00:14:51.890 }, 00:14:51.890 { 00:14:51.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.890 "dma_device_type": 2 00:14:51.890 } 00:14:51.890 ], 00:14:51.890 "driver_specific": {} 00:14:51.890 } 00:14:51.890 ] 00:14:51.890 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.890 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:51.890 15:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:51.890 15:41:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:51.890 15:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:51.890 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.890 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.890 BaseBdev3 00:14:51.890 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.890 15:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:51.890 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:51.890 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:51.890 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:51.890 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:51.890 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:51.890 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:51.890 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.890 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.890 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.890 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:51.890 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.890 15:41:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.890 [ 00:14:51.890 { 00:14:51.890 "name": "BaseBdev3", 00:14:51.890 "aliases": [ 00:14:51.890 "1f38ee31-e40e-47d9-87d3-d9ff4e430aa4" 00:14:51.890 ], 00:14:51.890 "product_name": "Malloc disk", 00:14:51.890 "block_size": 512, 00:14:51.890 "num_blocks": 65536, 00:14:51.890 "uuid": "1f38ee31-e40e-47d9-87d3-d9ff4e430aa4", 00:14:51.890 "assigned_rate_limits": { 00:14:51.890 "rw_ios_per_sec": 0, 00:14:51.890 "rw_mbytes_per_sec": 0, 00:14:51.890 "r_mbytes_per_sec": 0, 00:14:51.890 "w_mbytes_per_sec": 0 00:14:51.890 }, 00:14:51.890 "claimed": false, 00:14:51.890 "zoned": false, 00:14:51.890 "supported_io_types": { 00:14:51.890 "read": true, 00:14:51.890 "write": true, 00:14:51.890 "unmap": true, 00:14:51.890 "flush": true, 00:14:51.890 "reset": true, 00:14:51.890 "nvme_admin": false, 00:14:51.890 "nvme_io": false, 00:14:51.890 "nvme_io_md": false, 00:14:51.890 "write_zeroes": true, 00:14:51.890 "zcopy": true, 00:14:51.890 "get_zone_info": false, 00:14:51.890 "zone_management": false, 00:14:51.890 "zone_append": false, 00:14:51.890 "compare": false, 00:14:51.890 "compare_and_write": false, 00:14:51.890 "abort": true, 00:14:51.890 "seek_hole": false, 00:14:51.890 "seek_data": false, 00:14:51.890 "copy": true, 00:14:51.890 "nvme_iov_md": false 00:14:51.890 }, 00:14:51.890 "memory_domains": [ 00:14:51.890 { 00:14:51.890 "dma_device_id": "system", 00:14:51.890 "dma_device_type": 1 00:14:51.890 }, 00:14:51.890 { 00:14:51.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.890 "dma_device_type": 2 00:14:51.890 } 00:14:51.890 ], 00:14:51.890 "driver_specific": {} 00:14:51.890 } 00:14:51.890 ] 00:14:51.890 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.890 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:51.890 15:41:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:51.890 15:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:51.890 15:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:51.890 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.890 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.150 BaseBdev4 00:14:52.150 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.150 15:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:52.150 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:52.150 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:52.150 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:52.150 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:52.150 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:52.150 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:52.150 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.150 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.150 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.150 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:52.150 15:41:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.150 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.150 [ 00:14:52.150 { 00:14:52.150 "name": "BaseBdev4", 00:14:52.150 "aliases": [ 00:14:52.150 "333ba48f-fa1e-4289-b0a2-6c529fdce671" 00:14:52.150 ], 00:14:52.150 "product_name": "Malloc disk", 00:14:52.150 "block_size": 512, 00:14:52.150 "num_blocks": 65536, 00:14:52.150 "uuid": "333ba48f-fa1e-4289-b0a2-6c529fdce671", 00:14:52.150 "assigned_rate_limits": { 00:14:52.150 "rw_ios_per_sec": 0, 00:14:52.150 "rw_mbytes_per_sec": 0, 00:14:52.150 "r_mbytes_per_sec": 0, 00:14:52.150 "w_mbytes_per_sec": 0 00:14:52.150 }, 00:14:52.150 "claimed": false, 00:14:52.150 "zoned": false, 00:14:52.150 "supported_io_types": { 00:14:52.150 "read": true, 00:14:52.150 "write": true, 00:14:52.150 "unmap": true, 00:14:52.150 "flush": true, 00:14:52.150 "reset": true, 00:14:52.150 "nvme_admin": false, 00:14:52.150 "nvme_io": false, 00:14:52.150 "nvme_io_md": false, 00:14:52.150 "write_zeroes": true, 00:14:52.150 "zcopy": true, 00:14:52.150 "get_zone_info": false, 00:14:52.150 "zone_management": false, 00:14:52.150 "zone_append": false, 00:14:52.150 "compare": false, 00:14:52.150 "compare_and_write": false, 00:14:52.150 "abort": true, 00:14:52.150 "seek_hole": false, 00:14:52.150 "seek_data": false, 00:14:52.150 "copy": true, 00:14:52.150 "nvme_iov_md": false 00:14:52.150 }, 00:14:52.150 "memory_domains": [ 00:14:52.150 { 00:14:52.150 "dma_device_id": "system", 00:14:52.150 "dma_device_type": 1 00:14:52.150 }, 00:14:52.150 { 00:14:52.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.150 "dma_device_type": 2 00:14:52.150 } 00:14:52.150 ], 00:14:52.150 "driver_specific": {} 00:14:52.150 } 00:14:52.150 ] 00:14:52.150 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.150 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:14:52.150 15:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:52.150 15:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:52.150 15:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:52.150 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.150 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.150 [2024-12-06 15:41:35.271282] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:52.150 [2024-12-06 15:41:35.271453] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:52.150 [2024-12-06 15:41:35.271594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:52.150 [2024-12-06 15:41:35.274067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:52.150 [2024-12-06 15:41:35.274242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:52.150 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.150 15:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:52.150 15:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.150 15:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.150 15:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.150 15:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:14:52.150 15:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:52.150 15:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.150 15:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.150 15:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.150 15:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.150 15:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.150 15:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.150 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.150 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.150 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.150 15:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.150 "name": "Existed_Raid", 00:14:52.150 "uuid": "d91224f1-539c-406d-bc2f-75a081b94797", 00:14:52.150 "strip_size_kb": 0, 00:14:52.150 "state": "configuring", 00:14:52.150 "raid_level": "raid1", 00:14:52.150 "superblock": true, 00:14:52.150 "num_base_bdevs": 4, 00:14:52.150 "num_base_bdevs_discovered": 3, 00:14:52.150 "num_base_bdevs_operational": 4, 00:14:52.150 "base_bdevs_list": [ 00:14:52.150 { 00:14:52.150 "name": "BaseBdev1", 00:14:52.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.150 "is_configured": false, 00:14:52.150 "data_offset": 0, 00:14:52.150 "data_size": 0 00:14:52.150 }, 00:14:52.150 { 00:14:52.150 "name": "BaseBdev2", 00:14:52.150 "uuid": "a9aef3ed-abd3-4996-a82f-74b8bfc5b8eb", 
00:14:52.150 "is_configured": true, 00:14:52.150 "data_offset": 2048, 00:14:52.150 "data_size": 63488 00:14:52.150 }, 00:14:52.150 { 00:14:52.150 "name": "BaseBdev3", 00:14:52.150 "uuid": "1f38ee31-e40e-47d9-87d3-d9ff4e430aa4", 00:14:52.150 "is_configured": true, 00:14:52.150 "data_offset": 2048, 00:14:52.150 "data_size": 63488 00:14:52.150 }, 00:14:52.150 { 00:14:52.150 "name": "BaseBdev4", 00:14:52.150 "uuid": "333ba48f-fa1e-4289-b0a2-6c529fdce671", 00:14:52.150 "is_configured": true, 00:14:52.150 "data_offset": 2048, 00:14:52.150 "data_size": 63488 00:14:52.150 } 00:14:52.150 ] 00:14:52.150 }' 00:14:52.150 15:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.150 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.717 15:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:52.717 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.717 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.717 [2024-12-06 15:41:35.726680] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:52.717 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.717 15:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:52.717 15:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.717 15:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.717 15:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.717 15:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:14:52.717 15:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:52.717 15:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.717 15:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.717 15:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.717 15:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.717 15:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.717 15:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.717 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.717 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.717 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.717 15:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.717 "name": "Existed_Raid", 00:14:52.717 "uuid": "d91224f1-539c-406d-bc2f-75a081b94797", 00:14:52.717 "strip_size_kb": 0, 00:14:52.717 "state": "configuring", 00:14:52.717 "raid_level": "raid1", 00:14:52.717 "superblock": true, 00:14:52.717 "num_base_bdevs": 4, 00:14:52.717 "num_base_bdevs_discovered": 2, 00:14:52.717 "num_base_bdevs_operational": 4, 00:14:52.717 "base_bdevs_list": [ 00:14:52.717 { 00:14:52.717 "name": "BaseBdev1", 00:14:52.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.717 "is_configured": false, 00:14:52.717 "data_offset": 0, 00:14:52.717 "data_size": 0 00:14:52.717 }, 00:14:52.717 { 00:14:52.717 "name": null, 00:14:52.717 "uuid": "a9aef3ed-abd3-4996-a82f-74b8bfc5b8eb", 00:14:52.717 
"is_configured": false, 00:14:52.718 "data_offset": 0, 00:14:52.718 "data_size": 63488 00:14:52.718 }, 00:14:52.718 { 00:14:52.718 "name": "BaseBdev3", 00:14:52.718 "uuid": "1f38ee31-e40e-47d9-87d3-d9ff4e430aa4", 00:14:52.718 "is_configured": true, 00:14:52.718 "data_offset": 2048, 00:14:52.718 "data_size": 63488 00:14:52.718 }, 00:14:52.718 { 00:14:52.718 "name": "BaseBdev4", 00:14:52.718 "uuid": "333ba48f-fa1e-4289-b0a2-6c529fdce671", 00:14:52.718 "is_configured": true, 00:14:52.718 "data_offset": 2048, 00:14:52.718 "data_size": 63488 00:14:52.718 } 00:14:52.718 ] 00:14:52.718 }' 00:14:52.718 15:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.718 15:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.976 15:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.976 15:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.976 15:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:52.976 15:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.976 15:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.976 15:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:52.976 15:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:52.976 15:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.976 15:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.976 [2024-12-06 15:41:36.226683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:52.976 BaseBdev1 
00:14:52.976 15:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.976 15:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:52.976 15:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:52.976 15:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:52.976 15:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:52.976 15:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:52.976 15:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:52.976 15:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:52.976 15:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.976 15:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.976 15:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.976 15:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:52.976 15:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.976 15:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.976 [ 00:14:52.976 { 00:14:52.976 "name": "BaseBdev1", 00:14:52.976 "aliases": [ 00:14:52.976 "43b15bc1-26ac-4a2d-96ca-6709f48a3018" 00:14:52.976 ], 00:14:52.976 "product_name": "Malloc disk", 00:14:52.976 "block_size": 512, 00:14:52.976 "num_blocks": 65536, 00:14:52.976 "uuid": "43b15bc1-26ac-4a2d-96ca-6709f48a3018", 00:14:52.976 "assigned_rate_limits": { 00:14:52.976 
"rw_ios_per_sec": 0, 00:14:52.976 "rw_mbytes_per_sec": 0, 00:14:52.976 "r_mbytes_per_sec": 0, 00:14:52.976 "w_mbytes_per_sec": 0 00:14:52.976 }, 00:14:52.976 "claimed": true, 00:14:52.976 "claim_type": "exclusive_write", 00:14:52.976 "zoned": false, 00:14:52.976 "supported_io_types": { 00:14:52.976 "read": true, 00:14:52.976 "write": true, 00:14:52.976 "unmap": true, 00:14:52.976 "flush": true, 00:14:52.976 "reset": true, 00:14:52.976 "nvme_admin": false, 00:14:52.976 "nvme_io": false, 00:14:52.976 "nvme_io_md": false, 00:14:52.976 "write_zeroes": true, 00:14:52.976 "zcopy": true, 00:14:52.976 "get_zone_info": false, 00:14:52.976 "zone_management": false, 00:14:52.976 "zone_append": false, 00:14:52.976 "compare": false, 00:14:52.976 "compare_and_write": false, 00:14:52.976 "abort": true, 00:14:52.976 "seek_hole": false, 00:14:52.976 "seek_data": false, 00:14:52.976 "copy": true, 00:14:52.976 "nvme_iov_md": false 00:14:52.977 }, 00:14:52.977 "memory_domains": [ 00:14:52.977 { 00:14:52.977 "dma_device_id": "system", 00:14:53.236 "dma_device_type": 1 00:14:53.236 }, 00:14:53.236 { 00:14:53.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.236 "dma_device_type": 2 00:14:53.236 } 00:14:53.236 ], 00:14:53.236 "driver_specific": {} 00:14:53.236 } 00:14:53.236 ] 00:14:53.236 15:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.236 15:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:53.236 15:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:53.236 15:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:53.236 15:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:53.236 15:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:14:53.236 15:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:53.236 15:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:53.236 15:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.236 15:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.236 15:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.236 15:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.236 15:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.236 15:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.236 15:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.236 15:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.236 15:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.236 15:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.236 "name": "Existed_Raid", 00:14:53.236 "uuid": "d91224f1-539c-406d-bc2f-75a081b94797", 00:14:53.236 "strip_size_kb": 0, 00:14:53.236 "state": "configuring", 00:14:53.236 "raid_level": "raid1", 00:14:53.236 "superblock": true, 00:14:53.236 "num_base_bdevs": 4, 00:14:53.236 "num_base_bdevs_discovered": 3, 00:14:53.236 "num_base_bdevs_operational": 4, 00:14:53.236 "base_bdevs_list": [ 00:14:53.236 { 00:14:53.236 "name": "BaseBdev1", 00:14:53.236 "uuid": "43b15bc1-26ac-4a2d-96ca-6709f48a3018", 00:14:53.236 "is_configured": true, 00:14:53.236 "data_offset": 2048, 00:14:53.236 "data_size": 63488 
00:14:53.236 }, 00:14:53.236 { 00:14:53.236 "name": null, 00:14:53.236 "uuid": "a9aef3ed-abd3-4996-a82f-74b8bfc5b8eb", 00:14:53.236 "is_configured": false, 00:14:53.236 "data_offset": 0, 00:14:53.236 "data_size": 63488 00:14:53.236 }, 00:14:53.236 { 00:14:53.236 "name": "BaseBdev3", 00:14:53.236 "uuid": "1f38ee31-e40e-47d9-87d3-d9ff4e430aa4", 00:14:53.236 "is_configured": true, 00:14:53.236 "data_offset": 2048, 00:14:53.236 "data_size": 63488 00:14:53.236 }, 00:14:53.236 { 00:14:53.236 "name": "BaseBdev4", 00:14:53.236 "uuid": "333ba48f-fa1e-4289-b0a2-6c529fdce671", 00:14:53.236 "is_configured": true, 00:14:53.236 "data_offset": 2048, 00:14:53.236 "data_size": 63488 00:14:53.236 } 00:14:53.236 ] 00:14:53.236 }' 00:14:53.236 15:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.236 15:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.495 15:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.495 15:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.495 15:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.495 15:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:53.495 15:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.495 15:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:53.495 15:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:53.495 15:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.495 15:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.495 
[2024-12-06 15:41:36.730159] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:53.495 15:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.495 15:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:53.495 15:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:53.495 15:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:53.495 15:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:53.495 15:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:53.495 15:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:53.496 15:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.496 15:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.496 15:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.496 15:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.496 15:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.496 15:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.496 15:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.496 15:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.496 15:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.496 15:41:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.496 "name": "Existed_Raid", 00:14:53.496 "uuid": "d91224f1-539c-406d-bc2f-75a081b94797", 00:14:53.496 "strip_size_kb": 0, 00:14:53.496 "state": "configuring", 00:14:53.496 "raid_level": "raid1", 00:14:53.496 "superblock": true, 00:14:53.496 "num_base_bdevs": 4, 00:14:53.496 "num_base_bdevs_discovered": 2, 00:14:53.496 "num_base_bdevs_operational": 4, 00:14:53.496 "base_bdevs_list": [ 00:14:53.496 { 00:14:53.496 "name": "BaseBdev1", 00:14:53.496 "uuid": "43b15bc1-26ac-4a2d-96ca-6709f48a3018", 00:14:53.496 "is_configured": true, 00:14:53.496 "data_offset": 2048, 00:14:53.496 "data_size": 63488 00:14:53.496 }, 00:14:53.496 { 00:14:53.496 "name": null, 00:14:53.496 "uuid": "a9aef3ed-abd3-4996-a82f-74b8bfc5b8eb", 00:14:53.496 "is_configured": false, 00:14:53.496 "data_offset": 0, 00:14:53.496 "data_size": 63488 00:14:53.496 }, 00:14:53.496 { 00:14:53.496 "name": null, 00:14:53.496 "uuid": "1f38ee31-e40e-47d9-87d3-d9ff4e430aa4", 00:14:53.496 "is_configured": false, 00:14:53.496 "data_offset": 0, 00:14:53.496 "data_size": 63488 00:14:53.496 }, 00:14:53.496 { 00:14:53.496 "name": "BaseBdev4", 00:14:53.496 "uuid": "333ba48f-fa1e-4289-b0a2-6c529fdce671", 00:14:53.496 "is_configured": true, 00:14:53.496 "data_offset": 2048, 00:14:53.496 "data_size": 63488 00:14:53.496 } 00:14:53.496 ] 00:14:53.496 }' 00:14:53.496 15:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.496 15:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.065 15:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.065 15:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.065 15:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.065 15:41:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:54.065 15:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.065 15:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:54.065 15:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:54.065 15:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.065 15:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.065 [2024-12-06 15:41:37.225545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:54.065 15:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.065 15:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:54.065 15:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.065 15:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.065 15:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:54.065 15:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:54.065 15:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:54.065 15:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.065 15:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.065 15:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:54.065 15:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.065 15:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.065 15:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.065 15:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.065 15:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.065 15:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.065 15:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.065 "name": "Existed_Raid", 00:14:54.065 "uuid": "d91224f1-539c-406d-bc2f-75a081b94797", 00:14:54.065 "strip_size_kb": 0, 00:14:54.065 "state": "configuring", 00:14:54.065 "raid_level": "raid1", 00:14:54.065 "superblock": true, 00:14:54.065 "num_base_bdevs": 4, 00:14:54.065 "num_base_bdevs_discovered": 3, 00:14:54.065 "num_base_bdevs_operational": 4, 00:14:54.065 "base_bdevs_list": [ 00:14:54.065 { 00:14:54.065 "name": "BaseBdev1", 00:14:54.065 "uuid": "43b15bc1-26ac-4a2d-96ca-6709f48a3018", 00:14:54.065 "is_configured": true, 00:14:54.065 "data_offset": 2048, 00:14:54.065 "data_size": 63488 00:14:54.065 }, 00:14:54.065 { 00:14:54.065 "name": null, 00:14:54.065 "uuid": "a9aef3ed-abd3-4996-a82f-74b8bfc5b8eb", 00:14:54.065 "is_configured": false, 00:14:54.065 "data_offset": 0, 00:14:54.065 "data_size": 63488 00:14:54.065 }, 00:14:54.065 { 00:14:54.065 "name": "BaseBdev3", 00:14:54.065 "uuid": "1f38ee31-e40e-47d9-87d3-d9ff4e430aa4", 00:14:54.065 "is_configured": true, 00:14:54.065 "data_offset": 2048, 00:14:54.065 "data_size": 63488 00:14:54.065 }, 00:14:54.065 { 00:14:54.065 "name": "BaseBdev4", 00:14:54.065 "uuid": 
"333ba48f-fa1e-4289-b0a2-6c529fdce671", 00:14:54.065 "is_configured": true, 00:14:54.065 "data_offset": 2048, 00:14:54.065 "data_size": 63488 00:14:54.065 } 00:14:54.065 ] 00:14:54.065 }' 00:14:54.065 15:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.065 15:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.635 15:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.635 15:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.635 15:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.635 15:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:54.635 15:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.635 15:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:54.635 15:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:54.635 15:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.635 15:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.635 [2024-12-06 15:41:37.692922] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:54.635 15:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.635 15:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:54.635 15:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.635 15:41:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.635 15:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:54.635 15:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:54.635 15:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:54.635 15:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.635 15:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.635 15:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.635 15:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.635 15:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.635 15:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.635 15:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.635 15:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.635 15:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.635 15:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.635 "name": "Existed_Raid", 00:14:54.635 "uuid": "d91224f1-539c-406d-bc2f-75a081b94797", 00:14:54.635 "strip_size_kb": 0, 00:14:54.635 "state": "configuring", 00:14:54.635 "raid_level": "raid1", 00:14:54.635 "superblock": true, 00:14:54.635 "num_base_bdevs": 4, 00:14:54.635 "num_base_bdevs_discovered": 2, 00:14:54.635 "num_base_bdevs_operational": 4, 00:14:54.635 "base_bdevs_list": [ 00:14:54.635 { 00:14:54.635 "name": null, 00:14:54.635 
"uuid": "43b15bc1-26ac-4a2d-96ca-6709f48a3018", 00:14:54.635 "is_configured": false, 00:14:54.635 "data_offset": 0, 00:14:54.635 "data_size": 63488 00:14:54.635 }, 00:14:54.635 { 00:14:54.635 "name": null, 00:14:54.635 "uuid": "a9aef3ed-abd3-4996-a82f-74b8bfc5b8eb", 00:14:54.635 "is_configured": false, 00:14:54.635 "data_offset": 0, 00:14:54.635 "data_size": 63488 00:14:54.635 }, 00:14:54.635 { 00:14:54.635 "name": "BaseBdev3", 00:14:54.635 "uuid": "1f38ee31-e40e-47d9-87d3-d9ff4e430aa4", 00:14:54.635 "is_configured": true, 00:14:54.635 "data_offset": 2048, 00:14:54.635 "data_size": 63488 00:14:54.635 }, 00:14:54.635 { 00:14:54.635 "name": "BaseBdev4", 00:14:54.635 "uuid": "333ba48f-fa1e-4289-b0a2-6c529fdce671", 00:14:54.635 "is_configured": true, 00:14:54.635 "data_offset": 2048, 00:14:54.635 "data_size": 63488 00:14:54.635 } 00:14:54.635 ] 00:14:54.635 }' 00:14:54.635 15:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.635 15:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.204 15:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.204 15:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.204 15:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:55.204 15:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.204 15:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.204 15:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:55.204 15:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:55.204 15:41:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.204 15:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.204 [2024-12-06 15:41:38.269579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:55.204 15:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.204 15:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:55.204 15:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.204 15:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.204 15:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:55.204 15:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:55.204 15:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:55.204 15:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.204 15:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.204 15:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.204 15:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.204 15:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.204 15:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.204 15:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.204 15:41:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.204 15:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.204 15:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.204 "name": "Existed_Raid", 00:14:55.204 "uuid": "d91224f1-539c-406d-bc2f-75a081b94797", 00:14:55.204 "strip_size_kb": 0, 00:14:55.204 "state": "configuring", 00:14:55.204 "raid_level": "raid1", 00:14:55.204 "superblock": true, 00:14:55.204 "num_base_bdevs": 4, 00:14:55.204 "num_base_bdevs_discovered": 3, 00:14:55.204 "num_base_bdevs_operational": 4, 00:14:55.204 "base_bdevs_list": [ 00:14:55.204 { 00:14:55.204 "name": null, 00:14:55.204 "uuid": "43b15bc1-26ac-4a2d-96ca-6709f48a3018", 00:14:55.204 "is_configured": false, 00:14:55.204 "data_offset": 0, 00:14:55.204 "data_size": 63488 00:14:55.204 }, 00:14:55.204 { 00:14:55.204 "name": "BaseBdev2", 00:14:55.204 "uuid": "a9aef3ed-abd3-4996-a82f-74b8bfc5b8eb", 00:14:55.204 "is_configured": true, 00:14:55.204 "data_offset": 2048, 00:14:55.204 "data_size": 63488 00:14:55.204 }, 00:14:55.204 { 00:14:55.204 "name": "BaseBdev3", 00:14:55.204 "uuid": "1f38ee31-e40e-47d9-87d3-d9ff4e430aa4", 00:14:55.204 "is_configured": true, 00:14:55.204 "data_offset": 2048, 00:14:55.204 "data_size": 63488 00:14:55.204 }, 00:14:55.204 { 00:14:55.204 "name": "BaseBdev4", 00:14:55.204 "uuid": "333ba48f-fa1e-4289-b0a2-6c529fdce671", 00:14:55.204 "is_configured": true, 00:14:55.204 "data_offset": 2048, 00:14:55.204 "data_size": 63488 00:14:55.204 } 00:14:55.204 ] 00:14:55.204 }' 00:14:55.204 15:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.204 15:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.462 15:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.462 15:41:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:55.462 15:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.462 15:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.462 15:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.462 15:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:55.462 15:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.462 15:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.462 15:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.462 15:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:55.462 15:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.721 15:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 43b15bc1-26ac-4a2d-96ca-6709f48a3018 00:14:55.721 15:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.721 15:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.721 [2024-12-06 15:41:38.806247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:55.721 [2024-12-06 15:41:38.806538] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:55.721 [2024-12-06 15:41:38.806560] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:55.721 [2024-12-06 15:41:38.806894] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:14:55.721 [2024-12-06 15:41:38.807064] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:55.721 [2024-12-06 15:41:38.807074] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:55.721 NewBaseBdev 00:14:55.721 [2024-12-06 15:41:38.807229] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:55.721 15:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.721 15:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:55.721 15:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:55.721 15:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:55.721 15:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:55.721 15:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:55.721 15:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:55.721 15:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:55.721 15:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.721 15:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.721 15:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.721 15:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:55.721 15:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.721 15:41:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.721 [ 00:14:55.721 { 00:14:55.721 "name": "NewBaseBdev", 00:14:55.721 "aliases": [ 00:14:55.721 "43b15bc1-26ac-4a2d-96ca-6709f48a3018" 00:14:55.721 ], 00:14:55.721 "product_name": "Malloc disk", 00:14:55.721 "block_size": 512, 00:14:55.721 "num_blocks": 65536, 00:14:55.721 "uuid": "43b15bc1-26ac-4a2d-96ca-6709f48a3018", 00:14:55.721 "assigned_rate_limits": { 00:14:55.721 "rw_ios_per_sec": 0, 00:14:55.721 "rw_mbytes_per_sec": 0, 00:14:55.721 "r_mbytes_per_sec": 0, 00:14:55.721 "w_mbytes_per_sec": 0 00:14:55.721 }, 00:14:55.721 "claimed": true, 00:14:55.721 "claim_type": "exclusive_write", 00:14:55.721 "zoned": false, 00:14:55.721 "supported_io_types": { 00:14:55.721 "read": true, 00:14:55.721 "write": true, 00:14:55.721 "unmap": true, 00:14:55.721 "flush": true, 00:14:55.721 "reset": true, 00:14:55.721 "nvme_admin": false, 00:14:55.721 "nvme_io": false, 00:14:55.721 "nvme_io_md": false, 00:14:55.721 "write_zeroes": true, 00:14:55.721 "zcopy": true, 00:14:55.721 "get_zone_info": false, 00:14:55.721 "zone_management": false, 00:14:55.721 "zone_append": false, 00:14:55.722 "compare": false, 00:14:55.722 "compare_and_write": false, 00:14:55.722 "abort": true, 00:14:55.722 "seek_hole": false, 00:14:55.722 "seek_data": false, 00:14:55.722 "copy": true, 00:14:55.722 "nvme_iov_md": false 00:14:55.722 }, 00:14:55.722 "memory_domains": [ 00:14:55.722 { 00:14:55.722 "dma_device_id": "system", 00:14:55.722 "dma_device_type": 1 00:14:55.722 }, 00:14:55.722 { 00:14:55.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.722 "dma_device_type": 2 00:14:55.722 } 00:14:55.722 ], 00:14:55.722 "driver_specific": {} 00:14:55.722 } 00:14:55.722 ] 00:14:55.722 15:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.722 15:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:55.722 15:41:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:14:55.722 15:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.722 15:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:55.722 15:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:55.722 15:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:55.722 15:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:55.722 15:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.722 15:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.722 15:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.722 15:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.722 15:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.722 15:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.722 15:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.722 15:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.722 15:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.722 15:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.722 "name": "Existed_Raid", 00:14:55.722 "uuid": "d91224f1-539c-406d-bc2f-75a081b94797", 00:14:55.722 "strip_size_kb": 0, 00:14:55.722 
"state": "online", 00:14:55.722 "raid_level": "raid1", 00:14:55.722 "superblock": true, 00:14:55.722 "num_base_bdevs": 4, 00:14:55.722 "num_base_bdevs_discovered": 4, 00:14:55.722 "num_base_bdevs_operational": 4, 00:14:55.722 "base_bdevs_list": [ 00:14:55.722 { 00:14:55.722 "name": "NewBaseBdev", 00:14:55.722 "uuid": "43b15bc1-26ac-4a2d-96ca-6709f48a3018", 00:14:55.722 "is_configured": true, 00:14:55.722 "data_offset": 2048, 00:14:55.722 "data_size": 63488 00:14:55.722 }, 00:14:55.722 { 00:14:55.722 "name": "BaseBdev2", 00:14:55.722 "uuid": "a9aef3ed-abd3-4996-a82f-74b8bfc5b8eb", 00:14:55.722 "is_configured": true, 00:14:55.722 "data_offset": 2048, 00:14:55.722 "data_size": 63488 00:14:55.722 }, 00:14:55.722 { 00:14:55.722 "name": "BaseBdev3", 00:14:55.722 "uuid": "1f38ee31-e40e-47d9-87d3-d9ff4e430aa4", 00:14:55.722 "is_configured": true, 00:14:55.722 "data_offset": 2048, 00:14:55.722 "data_size": 63488 00:14:55.722 }, 00:14:55.722 { 00:14:55.722 "name": "BaseBdev4", 00:14:55.722 "uuid": "333ba48f-fa1e-4289-b0a2-6c529fdce671", 00:14:55.722 "is_configured": true, 00:14:55.722 "data_offset": 2048, 00:14:55.722 "data_size": 63488 00:14:55.722 } 00:14:55.722 ] 00:14:55.722 }' 00:14:55.722 15:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.722 15:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.980 15:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:55.980 15:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:55.980 15:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:55.980 15:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:55.980 15:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:55.980 
15:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:55.980 15:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:55.980 15:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.980 15:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.980 15:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:55.980 [2024-12-06 15:41:39.270173] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:56.238 15:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.238 15:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:56.238 "name": "Existed_Raid", 00:14:56.238 "aliases": [ 00:14:56.238 "d91224f1-539c-406d-bc2f-75a081b94797" 00:14:56.238 ], 00:14:56.238 "product_name": "Raid Volume", 00:14:56.238 "block_size": 512, 00:14:56.238 "num_blocks": 63488, 00:14:56.238 "uuid": "d91224f1-539c-406d-bc2f-75a081b94797", 00:14:56.238 "assigned_rate_limits": { 00:14:56.238 "rw_ios_per_sec": 0, 00:14:56.238 "rw_mbytes_per_sec": 0, 00:14:56.239 "r_mbytes_per_sec": 0, 00:14:56.239 "w_mbytes_per_sec": 0 00:14:56.239 }, 00:14:56.239 "claimed": false, 00:14:56.239 "zoned": false, 00:14:56.239 "supported_io_types": { 00:14:56.239 "read": true, 00:14:56.239 "write": true, 00:14:56.239 "unmap": false, 00:14:56.239 "flush": false, 00:14:56.239 "reset": true, 00:14:56.239 "nvme_admin": false, 00:14:56.239 "nvme_io": false, 00:14:56.239 "nvme_io_md": false, 00:14:56.239 "write_zeroes": true, 00:14:56.239 "zcopy": false, 00:14:56.239 "get_zone_info": false, 00:14:56.239 "zone_management": false, 00:14:56.239 "zone_append": false, 00:14:56.239 "compare": false, 00:14:56.239 "compare_and_write": false, 00:14:56.239 
"abort": false, 00:14:56.239 "seek_hole": false, 00:14:56.239 "seek_data": false, 00:14:56.239 "copy": false, 00:14:56.239 "nvme_iov_md": false 00:14:56.239 }, 00:14:56.239 "memory_domains": [ 00:14:56.239 { 00:14:56.239 "dma_device_id": "system", 00:14:56.239 "dma_device_type": 1 00:14:56.239 }, 00:14:56.239 { 00:14:56.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.239 "dma_device_type": 2 00:14:56.239 }, 00:14:56.239 { 00:14:56.239 "dma_device_id": "system", 00:14:56.239 "dma_device_type": 1 00:14:56.239 }, 00:14:56.239 { 00:14:56.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.239 "dma_device_type": 2 00:14:56.239 }, 00:14:56.239 { 00:14:56.239 "dma_device_id": "system", 00:14:56.239 "dma_device_type": 1 00:14:56.239 }, 00:14:56.239 { 00:14:56.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.239 "dma_device_type": 2 00:14:56.239 }, 00:14:56.239 { 00:14:56.239 "dma_device_id": "system", 00:14:56.239 "dma_device_type": 1 00:14:56.239 }, 00:14:56.239 { 00:14:56.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.239 "dma_device_type": 2 00:14:56.239 } 00:14:56.239 ], 00:14:56.239 "driver_specific": { 00:14:56.239 "raid": { 00:14:56.239 "uuid": "d91224f1-539c-406d-bc2f-75a081b94797", 00:14:56.239 "strip_size_kb": 0, 00:14:56.239 "state": "online", 00:14:56.239 "raid_level": "raid1", 00:14:56.239 "superblock": true, 00:14:56.239 "num_base_bdevs": 4, 00:14:56.239 "num_base_bdevs_discovered": 4, 00:14:56.239 "num_base_bdevs_operational": 4, 00:14:56.239 "base_bdevs_list": [ 00:14:56.239 { 00:14:56.239 "name": "NewBaseBdev", 00:14:56.239 "uuid": "43b15bc1-26ac-4a2d-96ca-6709f48a3018", 00:14:56.239 "is_configured": true, 00:14:56.239 "data_offset": 2048, 00:14:56.239 "data_size": 63488 00:14:56.239 }, 00:14:56.239 { 00:14:56.239 "name": "BaseBdev2", 00:14:56.239 "uuid": "a9aef3ed-abd3-4996-a82f-74b8bfc5b8eb", 00:14:56.239 "is_configured": true, 00:14:56.239 "data_offset": 2048, 00:14:56.239 "data_size": 63488 00:14:56.239 }, 00:14:56.239 { 
00:14:56.239 "name": "BaseBdev3", 00:14:56.239 "uuid": "1f38ee31-e40e-47d9-87d3-d9ff4e430aa4", 00:14:56.239 "is_configured": true, 00:14:56.239 "data_offset": 2048, 00:14:56.239 "data_size": 63488 00:14:56.239 }, 00:14:56.239 { 00:14:56.239 "name": "BaseBdev4", 00:14:56.239 "uuid": "333ba48f-fa1e-4289-b0a2-6c529fdce671", 00:14:56.239 "is_configured": true, 00:14:56.239 "data_offset": 2048, 00:14:56.239 "data_size": 63488 00:14:56.239 } 00:14:56.239 ] 00:14:56.239 } 00:14:56.239 } 00:14:56.239 }' 00:14:56.239 15:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:56.239 15:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:56.239 BaseBdev2 00:14:56.239 BaseBdev3 00:14:56.239 BaseBdev4' 00:14:56.239 15:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:56.239 15:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:56.239 15:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:56.239 15:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:56.239 15:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.239 15:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.239 15:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:56.239 15:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.239 15:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:14:56.239 15:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:56.239 15:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:56.239 15:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:56.239 15:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.239 15:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.239 15:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:56.239 15:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.239 15:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:56.239 15:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:56.239 15:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:56.239 15:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:56.239 15:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.239 15:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.239 15:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:56.239 15:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.498 15:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:56.498 15:41:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:56.498 15:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:56.498 15:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:56.498 15:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.498 15:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.498 15:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:56.498 15:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.498 15:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:56.498 15:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:56.498 15:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:56.498 15:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.498 15:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.498 [2024-12-06 15:41:39.589622] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:56.498 [2024-12-06 15:41:39.589767] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:56.498 [2024-12-06 15:41:39.589881] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:56.498 [2024-12-06 15:41:39.590244] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:56.498 [2024-12-06 15:41:39.590263] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:14:56.498 15:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.498 15:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73883 00:14:56.498 15:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73883 ']' 00:14:56.498 15:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73883 00:14:56.498 15:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:56.498 15:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:56.498 15:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73883 00:14:56.498 killing process with pid 73883 00:14:56.498 15:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:56.498 15:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:56.498 15:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73883' 00:14:56.498 15:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73883 00:14:56.498 [2024-12-06 15:41:39.646050] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:56.498 15:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73883 00:14:57.064 [2024-12-06 15:41:40.089375] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:58.457 15:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:58.457 00:14:58.457 real 0m11.600s 00:14:58.457 user 0m17.994s 00:14:58.457 sys 0m2.532s 00:14:58.457 15:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:14:58.457 ************************************ 00:14:58.457 END TEST raid_state_function_test_sb 00:14:58.457 ************************************ 00:14:58.457 15:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.457 15:41:41 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:14:58.457 15:41:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:58.457 15:41:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:58.457 15:41:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:58.457 ************************************ 00:14:58.457 START TEST raid_superblock_test 00:14:58.457 ************************************ 00:14:58.457 15:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:14:58.457 15:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:14:58.457 15:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:14:58.457 15:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:58.457 15:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:58.457 15:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:58.457 15:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:58.457 15:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:58.457 15:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:58.457 15:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:58.457 15:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:58.457 15:41:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:58.457 15:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:58.457 15:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:58.457 15:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:14:58.457 15:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:14:58.457 15:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74553 00:14:58.457 15:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:58.457 15:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74553 00:14:58.457 15:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74553 ']' 00:14:58.457 15:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.457 15:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:58.457 15:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.457 15:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:58.457 15:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.457 [2024-12-06 15:41:41.536019] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:14:58.457 [2024-12-06 15:41:41.536349] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74553 ] 00:14:58.457 [2024-12-06 15:41:41.722976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.718 [2024-12-06 15:41:41.852494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.975 [2024-12-06 15:41:42.076248] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:58.975 [2024-12-06 15:41:42.076432] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:59.234 
15:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.234 malloc1 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.234 [2024-12-06 15:41:42.407902] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:59.234 [2024-12-06 15:41:42.407973] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.234 [2024-12-06 15:41:42.408002] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:59.234 [2024-12-06 15:41:42.408014] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.234 [2024-12-06 15:41:42.410758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.234 [2024-12-06 15:41:42.410931] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:59.234 pt1 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.234 malloc2 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.234 [2024-12-06 15:41:42.473791] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:59.234 [2024-12-06 15:41:42.473969] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.234 [2024-12-06 15:41:42.474037] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:59.234 [2024-12-06 15:41:42.474136] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.234 [2024-12-06 15:41:42.476893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.234 [2024-12-06 15:41:42.477026] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:59.234 
pt2 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.234 15:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.493 malloc3 00:14:59.493 15:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.493 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:59.493 15:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.493 15:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.493 [2024-12-06 15:41:42.548603] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:59.493 [2024-12-06 15:41:42.548786] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.493 [2024-12-06 15:41:42.548823] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:59.493 [2024-12-06 15:41:42.548836] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.493 [2024-12-06 15:41:42.551543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.493 [2024-12-06 15:41:42.551581] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:59.493 pt3 00:14:59.493 15:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.493 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:59.493 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:59.493 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:14:59.493 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:14:59.493 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:59.493 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:59.493 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:59.493 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:59.493 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:14:59.493 15:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.493 15:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.493 malloc4 00:14:59.493 15:41:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.493 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:59.493 15:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.493 15:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.493 [2024-12-06 15:41:42.614311] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:59.493 [2024-12-06 15:41:42.614483] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.493 [2024-12-06 15:41:42.614531] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:59.493 [2024-12-06 15:41:42.614544] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.493 [2024-12-06 15:41:42.617209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.493 [2024-12-06 15:41:42.617250] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:59.493 pt4 00:14:59.493 15:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.493 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:59.493 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:59.493 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:14:59.493 15:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.493 15:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.493 [2024-12-06 15:41:42.626340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:59.493 [2024-12-06 15:41:42.628703] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:59.493 [2024-12-06 15:41:42.628886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:59.493 [2024-12-06 15:41:42.628963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:59.493 [2024-12-06 15:41:42.629160] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:59.493 [2024-12-06 15:41:42.629177] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:59.493 [2024-12-06 15:41:42.629453] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:59.493 [2024-12-06 15:41:42.629666] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:59.494 [2024-12-06 15:41:42.629686] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:59.494 [2024-12-06 15:41:42.629838] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:59.494 15:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.494 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:59.494 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.494 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.494 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:59.494 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.494 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:59.494 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.494 
15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.494 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.494 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.494 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.494 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.494 15:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.494 15:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.494 15:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.494 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.494 "name": "raid_bdev1", 00:14:59.494 "uuid": "7eb83c72-18a3-463a-8fe4-a4ee00f253ee", 00:14:59.494 "strip_size_kb": 0, 00:14:59.494 "state": "online", 00:14:59.494 "raid_level": "raid1", 00:14:59.494 "superblock": true, 00:14:59.494 "num_base_bdevs": 4, 00:14:59.494 "num_base_bdevs_discovered": 4, 00:14:59.494 "num_base_bdevs_operational": 4, 00:14:59.494 "base_bdevs_list": [ 00:14:59.494 { 00:14:59.494 "name": "pt1", 00:14:59.494 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:59.494 "is_configured": true, 00:14:59.494 "data_offset": 2048, 00:14:59.494 "data_size": 63488 00:14:59.494 }, 00:14:59.494 { 00:14:59.494 "name": "pt2", 00:14:59.494 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:59.494 "is_configured": true, 00:14:59.494 "data_offset": 2048, 00:14:59.494 "data_size": 63488 00:14:59.494 }, 00:14:59.494 { 00:14:59.494 "name": "pt3", 00:14:59.494 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:59.494 "is_configured": true, 00:14:59.494 "data_offset": 2048, 00:14:59.494 "data_size": 63488 
00:14:59.494 }, 00:14:59.494 { 00:14:59.494 "name": "pt4", 00:14:59.494 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:59.494 "is_configured": true, 00:14:59.494 "data_offset": 2048, 00:14:59.494 "data_size": 63488 00:14:59.494 } 00:14:59.494 ] 00:14:59.494 }' 00:14:59.494 15:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.494 15:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.753 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:59.753 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:59.753 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:59.753 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:59.753 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:59.753 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:59.753 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:59.753 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.753 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.753 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:59.753 [2024-12-06 15:41:43.018603] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:59.753 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.012 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:00.012 "name": "raid_bdev1", 00:15:00.012 "aliases": [ 00:15:00.012 "7eb83c72-18a3-463a-8fe4-a4ee00f253ee" 00:15:00.012 ], 
00:15:00.012 "product_name": "Raid Volume", 00:15:00.012 "block_size": 512, 00:15:00.012 "num_blocks": 63488, 00:15:00.012 "uuid": "7eb83c72-18a3-463a-8fe4-a4ee00f253ee", 00:15:00.012 "assigned_rate_limits": { 00:15:00.012 "rw_ios_per_sec": 0, 00:15:00.012 "rw_mbytes_per_sec": 0, 00:15:00.012 "r_mbytes_per_sec": 0, 00:15:00.012 "w_mbytes_per_sec": 0 00:15:00.012 }, 00:15:00.012 "claimed": false, 00:15:00.012 "zoned": false, 00:15:00.012 "supported_io_types": { 00:15:00.012 "read": true, 00:15:00.012 "write": true, 00:15:00.012 "unmap": false, 00:15:00.012 "flush": false, 00:15:00.012 "reset": true, 00:15:00.012 "nvme_admin": false, 00:15:00.012 "nvme_io": false, 00:15:00.012 "nvme_io_md": false, 00:15:00.012 "write_zeroes": true, 00:15:00.012 "zcopy": false, 00:15:00.012 "get_zone_info": false, 00:15:00.012 "zone_management": false, 00:15:00.012 "zone_append": false, 00:15:00.012 "compare": false, 00:15:00.012 "compare_and_write": false, 00:15:00.012 "abort": false, 00:15:00.012 "seek_hole": false, 00:15:00.012 "seek_data": false, 00:15:00.012 "copy": false, 00:15:00.012 "nvme_iov_md": false 00:15:00.012 }, 00:15:00.012 "memory_domains": [ 00:15:00.012 { 00:15:00.012 "dma_device_id": "system", 00:15:00.012 "dma_device_type": 1 00:15:00.012 }, 00:15:00.012 { 00:15:00.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.012 "dma_device_type": 2 00:15:00.012 }, 00:15:00.012 { 00:15:00.012 "dma_device_id": "system", 00:15:00.012 "dma_device_type": 1 00:15:00.012 }, 00:15:00.012 { 00:15:00.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.012 "dma_device_type": 2 00:15:00.012 }, 00:15:00.012 { 00:15:00.012 "dma_device_id": "system", 00:15:00.012 "dma_device_type": 1 00:15:00.012 }, 00:15:00.012 { 00:15:00.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.012 "dma_device_type": 2 00:15:00.012 }, 00:15:00.012 { 00:15:00.012 "dma_device_id": "system", 00:15:00.012 "dma_device_type": 1 00:15:00.012 }, 00:15:00.012 { 00:15:00.012 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:00.012 "dma_device_type": 2 00:15:00.012 } 00:15:00.012 ], 00:15:00.012 "driver_specific": { 00:15:00.012 "raid": { 00:15:00.012 "uuid": "7eb83c72-18a3-463a-8fe4-a4ee00f253ee", 00:15:00.012 "strip_size_kb": 0, 00:15:00.012 "state": "online", 00:15:00.012 "raid_level": "raid1", 00:15:00.012 "superblock": true, 00:15:00.012 "num_base_bdevs": 4, 00:15:00.012 "num_base_bdevs_discovered": 4, 00:15:00.012 "num_base_bdevs_operational": 4, 00:15:00.012 "base_bdevs_list": [ 00:15:00.012 { 00:15:00.012 "name": "pt1", 00:15:00.012 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:00.012 "is_configured": true, 00:15:00.012 "data_offset": 2048, 00:15:00.012 "data_size": 63488 00:15:00.012 }, 00:15:00.012 { 00:15:00.012 "name": "pt2", 00:15:00.012 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:00.012 "is_configured": true, 00:15:00.012 "data_offset": 2048, 00:15:00.012 "data_size": 63488 00:15:00.012 }, 00:15:00.012 { 00:15:00.012 "name": "pt3", 00:15:00.012 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:00.012 "is_configured": true, 00:15:00.012 "data_offset": 2048, 00:15:00.012 "data_size": 63488 00:15:00.012 }, 00:15:00.012 { 00:15:00.012 "name": "pt4", 00:15:00.012 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:00.012 "is_configured": true, 00:15:00.012 "data_offset": 2048, 00:15:00.012 "data_size": 63488 00:15:00.012 } 00:15:00.012 ] 00:15:00.012 } 00:15:00.012 } 00:15:00.012 }' 00:15:00.012 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:00.012 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:00.012 pt2 00:15:00.012 pt3 00:15:00.012 pt4' 00:15:00.012 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:00.012 15:41:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:00.012 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:00.012 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:00.012 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:00.012 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.012 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.012 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.012 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:00.012 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:00.012 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:00.012 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:00.012 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.012 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.012 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:00.012 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.012 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:00.012 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:00.012 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:00.012 15:41:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:00.013 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.013 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:00.013 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.013 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.013 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:00.013 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:00.013 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:00.013 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:00.013 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.013 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.272 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:00.272 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.272 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:00.272 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:00.272 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:00.272 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.272 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:15:00.272 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:00.272 [2024-12-06 15:41:43.354207] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:00.272 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.272 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7eb83c72-18a3-463a-8fe4-a4ee00f253ee 00:15:00.272 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7eb83c72-18a3-463a-8fe4-a4ee00f253ee ']' 00:15:00.272 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:00.272 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.272 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.272 [2024-12-06 15:41:43.397860] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:00.272 [2024-12-06 15:41:43.397892] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:00.272 [2024-12-06 15:41:43.397979] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:00.272 [2024-12-06 15:41:43.398072] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:00.272 [2024-12-06 15:41:43.398091] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:00.272 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.272 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:00.272 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.272 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:00.272 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.272 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.272 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:00.272 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:00.272 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:00.272 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:00.272 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.272 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.272 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.272 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:00.272 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:00.272 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.272 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.273 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.273 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:00.273 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:00.273 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.273 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.273 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:00.273 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:00.273 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:15:00.273 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.273 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.273 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.273 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:00.273 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.273 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.273 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:00.273 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.273 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:00.273 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:00.273 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:00.273 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:00.273 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:00.273 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:00.273 15:41:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:00.273 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:00.273 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:00.273 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.273 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.532 [2024-12-06 15:41:43.565669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:00.532 [2024-12-06 15:41:43.568099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:00.532 [2024-12-06 15:41:43.568153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:00.532 [2024-12-06 15:41:43.568192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:00.532 [2024-12-06 15:41:43.568248] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:00.532 [2024-12-06 15:41:43.568311] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:00.532 [2024-12-06 15:41:43.568333] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:00.533 [2024-12-06 15:41:43.568357] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:00.533 [2024-12-06 15:41:43.568374] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:00.533 [2024-12-06 15:41:43.568389] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 
00:15:00.533 request: 00:15:00.533 { 00:15:00.533 "name": "raid_bdev1", 00:15:00.533 "raid_level": "raid1", 00:15:00.533 "base_bdevs": [ 00:15:00.533 "malloc1", 00:15:00.533 "malloc2", 00:15:00.533 "malloc3", 00:15:00.533 "malloc4" 00:15:00.533 ], 00:15:00.533 "superblock": false, 00:15:00.533 "method": "bdev_raid_create", 00:15:00.533 "req_id": 1 00:15:00.533 } 00:15:00.533 Got JSON-RPC error response 00:15:00.533 response: 00:15:00.533 { 00:15:00.533 "code": -17, 00:15:00.533 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:00.533 } 00:15:00.533 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:00.533 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:00.533 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:00.533 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:00.533 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:00.533 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:00.533 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.533 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.533 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.533 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.533 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:00.533 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:00.533 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:00.533 15:41:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.533 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.533 [2024-12-06 15:41:43.629666] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:00.533 [2024-12-06 15:41:43.629725] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.533 [2024-12-06 15:41:43.629745] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:00.533 [2024-12-06 15:41:43.629760] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.533 [2024-12-06 15:41:43.632534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.533 [2024-12-06 15:41:43.632679] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:00.533 [2024-12-06 15:41:43.632785] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:00.533 [2024-12-06 15:41:43.632858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:00.533 pt1 00:15:00.533 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.533 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:15:00.533 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.533 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.533 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:00.533 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.533 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:00.533 15:41:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.533 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.533 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.533 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.533 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.533 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.533 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.533 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.533 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.533 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.533 "name": "raid_bdev1", 00:15:00.533 "uuid": "7eb83c72-18a3-463a-8fe4-a4ee00f253ee", 00:15:00.533 "strip_size_kb": 0, 00:15:00.533 "state": "configuring", 00:15:00.533 "raid_level": "raid1", 00:15:00.533 "superblock": true, 00:15:00.533 "num_base_bdevs": 4, 00:15:00.533 "num_base_bdevs_discovered": 1, 00:15:00.533 "num_base_bdevs_operational": 4, 00:15:00.533 "base_bdevs_list": [ 00:15:00.533 { 00:15:00.533 "name": "pt1", 00:15:00.533 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:00.533 "is_configured": true, 00:15:00.533 "data_offset": 2048, 00:15:00.533 "data_size": 63488 00:15:00.533 }, 00:15:00.533 { 00:15:00.533 "name": null, 00:15:00.533 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:00.533 "is_configured": false, 00:15:00.533 "data_offset": 2048, 00:15:00.533 "data_size": 63488 00:15:00.533 }, 00:15:00.533 { 00:15:00.533 "name": null, 00:15:00.533 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:00.533 
"is_configured": false, 00:15:00.533 "data_offset": 2048, 00:15:00.533 "data_size": 63488 00:15:00.533 }, 00:15:00.533 { 00:15:00.533 "name": null, 00:15:00.533 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:00.533 "is_configured": false, 00:15:00.533 "data_offset": 2048, 00:15:00.533 "data_size": 63488 00:15:00.533 } 00:15:00.533 ] 00:15:00.533 }' 00:15:00.533 15:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.533 15:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.792 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:15:00.792 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:00.792 15:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.792 15:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.792 [2024-12-06 15:41:44.053642] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:00.792 [2024-12-06 15:41:44.053713] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.792 [2024-12-06 15:41:44.053737] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:00.792 [2024-12-06 15:41:44.053752] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.792 [2024-12-06 15:41:44.054253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.792 [2024-12-06 15:41:44.054276] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:00.792 [2024-12-06 15:41:44.054357] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:00.792 [2024-12-06 15:41:44.054385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:15:00.792 pt2 00:15:00.792 15:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.792 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:00.792 15:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.792 15:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.792 [2024-12-06 15:41:44.061652] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:00.792 15:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.792 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:15:00.792 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.793 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.793 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:00.793 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.793 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:00.793 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.793 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.793 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.793 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.793 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.793 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:15:00.793 15:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.793 15:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.051 15:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.051 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.051 "name": "raid_bdev1", 00:15:01.052 "uuid": "7eb83c72-18a3-463a-8fe4-a4ee00f253ee", 00:15:01.052 "strip_size_kb": 0, 00:15:01.052 "state": "configuring", 00:15:01.052 "raid_level": "raid1", 00:15:01.052 "superblock": true, 00:15:01.052 "num_base_bdevs": 4, 00:15:01.052 "num_base_bdevs_discovered": 1, 00:15:01.052 "num_base_bdevs_operational": 4, 00:15:01.052 "base_bdevs_list": [ 00:15:01.052 { 00:15:01.052 "name": "pt1", 00:15:01.052 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:01.052 "is_configured": true, 00:15:01.052 "data_offset": 2048, 00:15:01.052 "data_size": 63488 00:15:01.052 }, 00:15:01.052 { 00:15:01.052 "name": null, 00:15:01.052 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:01.052 "is_configured": false, 00:15:01.052 "data_offset": 0, 00:15:01.052 "data_size": 63488 00:15:01.052 }, 00:15:01.052 { 00:15:01.052 "name": null, 00:15:01.052 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:01.052 "is_configured": false, 00:15:01.052 "data_offset": 2048, 00:15:01.052 "data_size": 63488 00:15:01.052 }, 00:15:01.052 { 00:15:01.052 "name": null, 00:15:01.052 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:01.052 "is_configured": false, 00:15:01.052 "data_offset": 2048, 00:15:01.052 "data_size": 63488 00:15:01.052 } 00:15:01.052 ] 00:15:01.052 }' 00:15:01.052 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.052 15:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.311 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:15:01.311 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:01.311 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:01.311 15:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.311 15:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.311 [2024-12-06 15:41:44.529094] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:01.311 [2024-12-06 15:41:44.529171] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.311 [2024-12-06 15:41:44.529199] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:01.311 [2024-12-06 15:41:44.529212] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.311 [2024-12-06 15:41:44.529795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.311 [2024-12-06 15:41:44.529817] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:01.311 [2024-12-06 15:41:44.529916] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:01.311 [2024-12-06 15:41:44.529943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:01.311 pt2 00:15:01.311 15:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.311 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:01.311 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:01.311 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:01.311 15:41:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.311 15:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.311 [2024-12-06 15:41:44.541049] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:01.311 [2024-12-06 15:41:44.541226] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.311 [2024-12-06 15:41:44.541260] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:01.311 [2024-12-06 15:41:44.541271] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.311 [2024-12-06 15:41:44.541732] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.311 [2024-12-06 15:41:44.541751] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:01.311 [2024-12-06 15:41:44.541826] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:01.311 [2024-12-06 15:41:44.541847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:01.311 pt3 00:15:01.311 15:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.311 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:01.311 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:01.311 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:01.311 15:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.311 15:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.311 [2024-12-06 15:41:44.553001] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:01.311 [2024-12-06 
15:41:44.553049] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.311 [2024-12-06 15:41:44.553071] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:01.311 [2024-12-06 15:41:44.553082] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.311 [2024-12-06 15:41:44.553498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.311 [2024-12-06 15:41:44.553540] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:01.311 [2024-12-06 15:41:44.553609] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:01.311 [2024-12-06 15:41:44.553642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:01.311 [2024-12-06 15:41:44.553811] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:01.311 [2024-12-06 15:41:44.553822] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:01.311 [2024-12-06 15:41:44.554103] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:01.311 [2024-12-06 15:41:44.554275] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:01.312 [2024-12-06 15:41:44.554291] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:01.312 [2024-12-06 15:41:44.554441] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.312 pt4 00:15:01.312 15:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.312 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:01.312 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:01.312 15:41:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:01.312 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.312 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.312 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:01.312 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:01.312 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:01.312 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.312 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.312 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.312 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.312 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.312 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.312 15:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.312 15:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.312 15:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.571 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.571 "name": "raid_bdev1", 00:15:01.571 "uuid": "7eb83c72-18a3-463a-8fe4-a4ee00f253ee", 00:15:01.571 "strip_size_kb": 0, 00:15:01.571 "state": "online", 00:15:01.571 "raid_level": "raid1", 00:15:01.571 "superblock": true, 00:15:01.571 "num_base_bdevs": 4, 00:15:01.571 
"num_base_bdevs_discovered": 4, 00:15:01.571 "num_base_bdevs_operational": 4, 00:15:01.571 "base_bdevs_list": [ 00:15:01.571 { 00:15:01.571 "name": "pt1", 00:15:01.571 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:01.571 "is_configured": true, 00:15:01.571 "data_offset": 2048, 00:15:01.571 "data_size": 63488 00:15:01.571 }, 00:15:01.571 { 00:15:01.571 "name": "pt2", 00:15:01.571 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:01.571 "is_configured": true, 00:15:01.571 "data_offset": 2048, 00:15:01.571 "data_size": 63488 00:15:01.571 }, 00:15:01.571 { 00:15:01.571 "name": "pt3", 00:15:01.571 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:01.571 "is_configured": true, 00:15:01.571 "data_offset": 2048, 00:15:01.571 "data_size": 63488 00:15:01.571 }, 00:15:01.571 { 00:15:01.571 "name": "pt4", 00:15:01.571 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:01.571 "is_configured": true, 00:15:01.571 "data_offset": 2048, 00:15:01.571 "data_size": 63488 00:15:01.571 } 00:15:01.571 ] 00:15:01.571 }' 00:15:01.571 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.571 15:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.831 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:01.831 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:01.831 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:01.831 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:01.831 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:01.831 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:01.831 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:15:01.831 15:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.831 15:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:01.831 15:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.831 [2024-12-06 15:41:44.988800] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:01.831 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.831 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:01.831 "name": "raid_bdev1", 00:15:01.831 "aliases": [ 00:15:01.831 "7eb83c72-18a3-463a-8fe4-a4ee00f253ee" 00:15:01.831 ], 00:15:01.831 "product_name": "Raid Volume", 00:15:01.831 "block_size": 512, 00:15:01.831 "num_blocks": 63488, 00:15:01.831 "uuid": "7eb83c72-18a3-463a-8fe4-a4ee00f253ee", 00:15:01.831 "assigned_rate_limits": { 00:15:01.831 "rw_ios_per_sec": 0, 00:15:01.831 "rw_mbytes_per_sec": 0, 00:15:01.831 "r_mbytes_per_sec": 0, 00:15:01.831 "w_mbytes_per_sec": 0 00:15:01.831 }, 00:15:01.831 "claimed": false, 00:15:01.831 "zoned": false, 00:15:01.831 "supported_io_types": { 00:15:01.831 "read": true, 00:15:01.831 "write": true, 00:15:01.831 "unmap": false, 00:15:01.831 "flush": false, 00:15:01.831 "reset": true, 00:15:01.831 "nvme_admin": false, 00:15:01.831 "nvme_io": false, 00:15:01.831 "nvme_io_md": false, 00:15:01.831 "write_zeroes": true, 00:15:01.831 "zcopy": false, 00:15:01.831 "get_zone_info": false, 00:15:01.831 "zone_management": false, 00:15:01.831 "zone_append": false, 00:15:01.831 "compare": false, 00:15:01.831 "compare_and_write": false, 00:15:01.831 "abort": false, 00:15:01.831 "seek_hole": false, 00:15:01.831 "seek_data": false, 00:15:01.831 "copy": false, 00:15:01.831 "nvme_iov_md": false 00:15:01.831 }, 00:15:01.831 "memory_domains": [ 00:15:01.831 { 00:15:01.831 "dma_device_id": "system", 00:15:01.831 
"dma_device_type": 1 00:15:01.831 }, 00:15:01.831 { 00:15:01.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.831 "dma_device_type": 2 00:15:01.831 }, 00:15:01.831 { 00:15:01.831 "dma_device_id": "system", 00:15:01.831 "dma_device_type": 1 00:15:01.831 }, 00:15:01.831 { 00:15:01.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.831 "dma_device_type": 2 00:15:01.831 }, 00:15:01.831 { 00:15:01.831 "dma_device_id": "system", 00:15:01.831 "dma_device_type": 1 00:15:01.831 }, 00:15:01.831 { 00:15:01.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.831 "dma_device_type": 2 00:15:01.831 }, 00:15:01.831 { 00:15:01.831 "dma_device_id": "system", 00:15:01.831 "dma_device_type": 1 00:15:01.831 }, 00:15:01.831 { 00:15:01.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.831 "dma_device_type": 2 00:15:01.831 } 00:15:01.831 ], 00:15:01.831 "driver_specific": { 00:15:01.831 "raid": { 00:15:01.831 "uuid": "7eb83c72-18a3-463a-8fe4-a4ee00f253ee", 00:15:01.831 "strip_size_kb": 0, 00:15:01.831 "state": "online", 00:15:01.831 "raid_level": "raid1", 00:15:01.831 "superblock": true, 00:15:01.831 "num_base_bdevs": 4, 00:15:01.831 "num_base_bdevs_discovered": 4, 00:15:01.831 "num_base_bdevs_operational": 4, 00:15:01.831 "base_bdevs_list": [ 00:15:01.831 { 00:15:01.831 "name": "pt1", 00:15:01.831 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:01.831 "is_configured": true, 00:15:01.831 "data_offset": 2048, 00:15:01.831 "data_size": 63488 00:15:01.831 }, 00:15:01.831 { 00:15:01.831 "name": "pt2", 00:15:01.831 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:01.831 "is_configured": true, 00:15:01.831 "data_offset": 2048, 00:15:01.831 "data_size": 63488 00:15:01.831 }, 00:15:01.831 { 00:15:01.831 "name": "pt3", 00:15:01.831 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:01.831 "is_configured": true, 00:15:01.831 "data_offset": 2048, 00:15:01.831 "data_size": 63488 00:15:01.831 }, 00:15:01.831 { 00:15:01.831 "name": "pt4", 00:15:01.831 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:15:01.831 "is_configured": true, 00:15:01.831 "data_offset": 2048, 00:15:01.831 "data_size": 63488 00:15:01.831 } 00:15:01.831 ] 00:15:01.831 } 00:15:01.831 } 00:15:01.831 }' 00:15:01.831 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:01.831 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:01.831 pt2 00:15:01.831 pt3 00:15:01.831 pt4' 00:15:01.831 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.831 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:01.831 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.831 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.831 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:01.831 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.831 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.831 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.089 15:41:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.089 [2024-12-06 15:41:45.280294] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7eb83c72-18a3-463a-8fe4-a4ee00f253ee '!=' 7eb83c72-18a3-463a-8fe4-a4ee00f253ee ']' 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.089 [2024-12-06 15:41:45.323975] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:02.089 15:41:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.089 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.090 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.090 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.090 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.090 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.090 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.090 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.090 "name": "raid_bdev1", 00:15:02.090 "uuid": "7eb83c72-18a3-463a-8fe4-a4ee00f253ee", 00:15:02.090 "strip_size_kb": 0, 00:15:02.090 "state": "online", 
00:15:02.090 "raid_level": "raid1", 00:15:02.090 "superblock": true, 00:15:02.090 "num_base_bdevs": 4, 00:15:02.090 "num_base_bdevs_discovered": 3, 00:15:02.090 "num_base_bdevs_operational": 3, 00:15:02.090 "base_bdevs_list": [ 00:15:02.090 { 00:15:02.090 "name": null, 00:15:02.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.090 "is_configured": false, 00:15:02.090 "data_offset": 0, 00:15:02.090 "data_size": 63488 00:15:02.090 }, 00:15:02.090 { 00:15:02.090 "name": "pt2", 00:15:02.090 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:02.090 "is_configured": true, 00:15:02.090 "data_offset": 2048, 00:15:02.090 "data_size": 63488 00:15:02.090 }, 00:15:02.090 { 00:15:02.090 "name": "pt3", 00:15:02.090 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:02.090 "is_configured": true, 00:15:02.090 "data_offset": 2048, 00:15:02.090 "data_size": 63488 00:15:02.090 }, 00:15:02.090 { 00:15:02.090 "name": "pt4", 00:15:02.090 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:02.090 "is_configured": true, 00:15:02.090 "data_offset": 2048, 00:15:02.090 "data_size": 63488 00:15:02.090 } 00:15:02.090 ] 00:15:02.090 }' 00:15:02.090 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.090 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.656 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:02.656 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.656 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.656 [2024-12-06 15:41:45.727444] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:02.657 [2024-12-06 15:41:45.727483] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:02.657 [2024-12-06 15:41:45.727606] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:15:02.657 [2024-12-06 15:41:45.727700] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:02.657 [2024-12-06 15:41:45.727712] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:02.657 
15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.657 [2024-12-06 15:41:45.831274] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:02.657 [2024-12-06 15:41:45.831336] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.657 [2024-12-06 15:41:45.831360] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:02.657 [2024-12-06 15:41:45.831373] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.657 [2024-12-06 15:41:45.834224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.657 [2024-12-06 15:41:45.834380] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:02.657 [2024-12-06 15:41:45.834495] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:02.657 [2024-12-06 15:41:45.834574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:02.657 pt2 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.657 "name": "raid_bdev1", 00:15:02.657 "uuid": "7eb83c72-18a3-463a-8fe4-a4ee00f253ee", 00:15:02.657 "strip_size_kb": 0, 00:15:02.657 "state": "configuring", 00:15:02.657 "raid_level": "raid1", 00:15:02.657 "superblock": true, 00:15:02.657 "num_base_bdevs": 4, 00:15:02.657 "num_base_bdevs_discovered": 1, 00:15:02.657 "num_base_bdevs_operational": 3, 00:15:02.657 "base_bdevs_list": [ 00:15:02.657 { 00:15:02.657 "name": null, 00:15:02.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.657 "is_configured": false, 00:15:02.657 "data_offset": 2048, 00:15:02.657 "data_size": 63488 00:15:02.657 }, 00:15:02.657 { 00:15:02.657 "name": "pt2", 00:15:02.657 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:02.657 "is_configured": true, 00:15:02.657 "data_offset": 2048, 00:15:02.657 "data_size": 63488 00:15:02.657 }, 00:15:02.657 { 00:15:02.657 "name": null, 00:15:02.657 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:02.657 "is_configured": false, 00:15:02.657 "data_offset": 2048, 00:15:02.657 "data_size": 63488 00:15:02.657 }, 00:15:02.657 { 00:15:02.657 "name": null, 00:15:02.657 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:02.657 "is_configured": false, 00:15:02.657 "data_offset": 2048, 00:15:02.657 "data_size": 63488 00:15:02.657 } 00:15:02.657 ] 00:15:02.657 }' 
00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.657 15:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.221 15:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:03.221 15:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:03.221 15:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:03.221 15:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.221 15:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.221 [2024-12-06 15:41:46.294694] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:03.221 [2024-12-06 15:41:46.294917] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.221 [2024-12-06 15:41:46.294959] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:03.221 [2024-12-06 15:41:46.294973] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.222 [2024-12-06 15:41:46.295571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.222 [2024-12-06 15:41:46.295595] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:03.222 [2024-12-06 15:41:46.295705] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:03.222 [2024-12-06 15:41:46.295732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:03.222 pt3 00:15:03.222 15:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.222 15:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:15:03.222 15:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.222 15:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:03.222 15:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:03.222 15:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:03.222 15:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:03.222 15:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.222 15:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.222 15:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.222 15:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.222 15:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.222 15:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.222 15:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.222 15:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.222 15:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.222 15:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.222 "name": "raid_bdev1", 00:15:03.222 "uuid": "7eb83c72-18a3-463a-8fe4-a4ee00f253ee", 00:15:03.222 "strip_size_kb": 0, 00:15:03.222 "state": "configuring", 00:15:03.222 "raid_level": "raid1", 00:15:03.222 "superblock": true, 00:15:03.222 "num_base_bdevs": 4, 00:15:03.222 "num_base_bdevs_discovered": 2, 00:15:03.222 "num_base_bdevs_operational": 3, 00:15:03.222 
"base_bdevs_list": [ 00:15:03.222 { 00:15:03.222 "name": null, 00:15:03.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.222 "is_configured": false, 00:15:03.222 "data_offset": 2048, 00:15:03.222 "data_size": 63488 00:15:03.222 }, 00:15:03.222 { 00:15:03.222 "name": "pt2", 00:15:03.222 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:03.222 "is_configured": true, 00:15:03.222 "data_offset": 2048, 00:15:03.222 "data_size": 63488 00:15:03.222 }, 00:15:03.222 { 00:15:03.222 "name": "pt3", 00:15:03.222 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:03.222 "is_configured": true, 00:15:03.222 "data_offset": 2048, 00:15:03.222 "data_size": 63488 00:15:03.222 }, 00:15:03.222 { 00:15:03.222 "name": null, 00:15:03.222 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:03.222 "is_configured": false, 00:15:03.222 "data_offset": 2048, 00:15:03.222 "data_size": 63488 00:15:03.222 } 00:15:03.222 ] 00:15:03.222 }' 00:15:03.222 15:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.222 15:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.482 15:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:03.482 15:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:03.482 15:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:15:03.482 15:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:03.482 15:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.482 15:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.482 [2024-12-06 15:41:46.718398] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:03.482 [2024-12-06 15:41:46.718645] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.482 [2024-12-06 15:41:46.718723] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:03.482 [2024-12-06 15:41:46.718907] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.482 [2024-12-06 15:41:46.719544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.482 [2024-12-06 15:41:46.719568] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:03.482 [2024-12-06 15:41:46.719689] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:03.482 [2024-12-06 15:41:46.719719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:03.482 [2024-12-06 15:41:46.719883] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:03.482 [2024-12-06 15:41:46.719894] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:03.482 [2024-12-06 15:41:46.720185] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:03.482 [2024-12-06 15:41:46.720376] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:03.482 [2024-12-06 15:41:46.720392] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:03.482 [2024-12-06 15:41:46.720569] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.482 pt4 00:15:03.482 15:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.482 15:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:03.482 15:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.482 15:41:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.482 15:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:03.482 15:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:03.482 15:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:03.482 15:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.482 15:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.482 15:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.482 15:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.482 15:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.482 15:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.482 15:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.482 15:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.482 15:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.741 15:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.741 "name": "raid_bdev1", 00:15:03.741 "uuid": "7eb83c72-18a3-463a-8fe4-a4ee00f253ee", 00:15:03.741 "strip_size_kb": 0, 00:15:03.741 "state": "online", 00:15:03.741 "raid_level": "raid1", 00:15:03.741 "superblock": true, 00:15:03.741 "num_base_bdevs": 4, 00:15:03.741 "num_base_bdevs_discovered": 3, 00:15:03.741 "num_base_bdevs_operational": 3, 00:15:03.741 "base_bdevs_list": [ 00:15:03.741 { 00:15:03.741 "name": null, 00:15:03.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.741 "is_configured": false, 00:15:03.741 
"data_offset": 2048, 00:15:03.741 "data_size": 63488 00:15:03.741 }, 00:15:03.741 { 00:15:03.741 "name": "pt2", 00:15:03.741 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:03.741 "is_configured": true, 00:15:03.741 "data_offset": 2048, 00:15:03.741 "data_size": 63488 00:15:03.741 }, 00:15:03.741 { 00:15:03.741 "name": "pt3", 00:15:03.741 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:03.741 "is_configured": true, 00:15:03.741 "data_offset": 2048, 00:15:03.741 "data_size": 63488 00:15:03.741 }, 00:15:03.741 { 00:15:03.741 "name": "pt4", 00:15:03.741 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:03.741 "is_configured": true, 00:15:03.741 "data_offset": 2048, 00:15:03.741 "data_size": 63488 00:15:03.741 } 00:15:03.741 ] 00:15:03.741 }' 00:15:03.741 15:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.741 15:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.000 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:04.000 15:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.000 15:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.000 [2024-12-06 15:41:47.161803] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:04.000 [2024-12-06 15:41:47.161841] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:04.000 [2024-12-06 15:41:47.161949] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:04.000 [2024-12-06 15:41:47.162043] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:04.000 [2024-12-06 15:41:47.162061] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:04.000 15:41:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.000 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.000 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:04.000 15:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.000 15:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.000 15:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.000 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:04.000 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:04.000 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:15:04.000 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:15:04.000 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:15:04.000 15:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.000 15:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.000 15:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.000 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:04.000 15:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.000 15:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.000 [2024-12-06 15:41:47.237670] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:04.000 [2024-12-06 15:41:47.237751] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:15:04.000 [2024-12-06 15:41:47.237775] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:15:04.000 [2024-12-06 15:41:47.237794] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.000 [2024-12-06 15:41:47.241063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.000 [2024-12-06 15:41:47.241112] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:04.000 [2024-12-06 15:41:47.241217] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:04.000 [2024-12-06 15:41:47.241284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:04.000 [2024-12-06 15:41:47.241468] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:04.000 [2024-12-06 15:41:47.241487] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:04.000 [2024-12-06 15:41:47.241519] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:04.000 [2024-12-06 15:41:47.241597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:04.000 [2024-12-06 15:41:47.241716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:04.000 pt1 00:15:04.000 15:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.000 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:15:04.000 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:15:04.000 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:04.000 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:15:04.000 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:04.000 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:04.000 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:04.000 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.000 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.000 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.000 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.000 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.000 15:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.000 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.000 15:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.000 15:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.259 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.259 "name": "raid_bdev1", 00:15:04.259 "uuid": "7eb83c72-18a3-463a-8fe4-a4ee00f253ee", 00:15:04.259 "strip_size_kb": 0, 00:15:04.259 "state": "configuring", 00:15:04.259 "raid_level": "raid1", 00:15:04.259 "superblock": true, 00:15:04.259 "num_base_bdevs": 4, 00:15:04.259 "num_base_bdevs_discovered": 2, 00:15:04.259 "num_base_bdevs_operational": 3, 00:15:04.259 "base_bdevs_list": [ 00:15:04.259 { 00:15:04.259 "name": null, 00:15:04.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.259 "is_configured": false, 00:15:04.259 "data_offset": 2048, 00:15:04.259 
"data_size": 63488 00:15:04.259 }, 00:15:04.259 { 00:15:04.259 "name": "pt2", 00:15:04.259 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:04.259 "is_configured": true, 00:15:04.259 "data_offset": 2048, 00:15:04.259 "data_size": 63488 00:15:04.259 }, 00:15:04.259 { 00:15:04.259 "name": "pt3", 00:15:04.259 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:04.259 "is_configured": true, 00:15:04.259 "data_offset": 2048, 00:15:04.259 "data_size": 63488 00:15:04.259 }, 00:15:04.259 { 00:15:04.259 "name": null, 00:15:04.259 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:04.259 "is_configured": false, 00:15:04.259 "data_offset": 2048, 00:15:04.259 "data_size": 63488 00:15:04.259 } 00:15:04.259 ] 00:15:04.259 }' 00:15:04.259 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.259 15:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.518 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:04.518 15:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.518 15:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.518 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:04.518 15:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.518 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:04.518 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:04.518 15:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.518 15:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.518 [2024-12-06 
15:41:47.757668] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:04.518 [2024-12-06 15:41:47.757751] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.518 [2024-12-06 15:41:47.757782] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:04.518 [2024-12-06 15:41:47.757796] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.518 [2024-12-06 15:41:47.758387] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.518 [2024-12-06 15:41:47.758411] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:04.518 [2024-12-06 15:41:47.758530] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:04.518 [2024-12-06 15:41:47.758561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:04.518 [2024-12-06 15:41:47.758730] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:04.518 [2024-12-06 15:41:47.758741] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:04.518 [2024-12-06 15:41:47.759064] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:04.518 [2024-12-06 15:41:47.759233] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:04.518 [2024-12-06 15:41:47.759248] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:04.518 [2024-12-06 15:41:47.759412] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.518 pt4 00:15:04.518 15:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.518 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:04.518 15:41:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:04.518 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.518 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:04.518 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:04.518 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:04.518 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.518 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.518 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.518 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.518 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.518 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.518 15:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.518 15:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.518 15:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.518 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.518 "name": "raid_bdev1", 00:15:04.518 "uuid": "7eb83c72-18a3-463a-8fe4-a4ee00f253ee", 00:15:04.518 "strip_size_kb": 0, 00:15:04.518 "state": "online", 00:15:04.518 "raid_level": "raid1", 00:15:04.518 "superblock": true, 00:15:04.518 "num_base_bdevs": 4, 00:15:04.518 "num_base_bdevs_discovered": 3, 00:15:04.518 "num_base_bdevs_operational": 3, 00:15:04.518 "base_bdevs_list": [ 00:15:04.518 { 
00:15:04.518 "name": null, 00:15:04.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.518 "is_configured": false, 00:15:04.518 "data_offset": 2048, 00:15:04.518 "data_size": 63488 00:15:04.518 }, 00:15:04.518 { 00:15:04.518 "name": "pt2", 00:15:04.518 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:04.518 "is_configured": true, 00:15:04.518 "data_offset": 2048, 00:15:04.518 "data_size": 63488 00:15:04.518 }, 00:15:04.518 { 00:15:04.518 "name": "pt3", 00:15:04.518 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:04.518 "is_configured": true, 00:15:04.518 "data_offset": 2048, 00:15:04.518 "data_size": 63488 00:15:04.518 }, 00:15:04.518 { 00:15:04.518 "name": "pt4", 00:15:04.518 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:04.518 "is_configured": true, 00:15:04.518 "data_offset": 2048, 00:15:04.518 "data_size": 63488 00:15:04.518 } 00:15:04.518 ] 00:15:04.518 }' 00:15:04.518 15:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.518 15:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.086 15:41:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:05.086 15:41:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:05.086 15:41:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.086 15:41:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.086 15:41:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.086 15:41:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:05.086 15:41:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:05.086 15:41:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:05.086 
15:41:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.086 15:41:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.086 [2024-12-06 15:41:48.257995] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:05.086 15:41:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.086 15:41:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 7eb83c72-18a3-463a-8fe4-a4ee00f253ee '!=' 7eb83c72-18a3-463a-8fe4-a4ee00f253ee ']' 00:15:05.086 15:41:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74553 00:15:05.086 15:41:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74553 ']' 00:15:05.086 15:41:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74553 00:15:05.086 15:41:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:05.086 15:41:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:05.086 15:41:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74553 00:15:05.086 killing process with pid 74553 00:15:05.086 15:41:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:05.086 15:41:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:05.086 15:41:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74553' 00:15:05.086 15:41:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74553 00:15:05.086 [2024-12-06 15:41:48.337455] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:05.086 [2024-12-06 15:41:48.337597] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:05.086 15:41:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74553 00:15:05.086 [2024-12-06 15:41:48.337695] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:05.086 [2024-12-06 15:41:48.337713] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:05.653 [2024-12-06 15:41:48.813172] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:07.026 15:41:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:07.026 00:15:07.026 real 0m8.707s 00:15:07.026 user 0m13.397s 00:15:07.026 sys 0m1.876s 00:15:07.026 ************************************ 00:15:07.026 END TEST raid_superblock_test 00:15:07.026 ************************************ 00:15:07.026 15:41:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:07.026 15:41:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.026 15:41:50 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:15:07.026 15:41:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:07.027 15:41:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:07.027 15:41:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:07.027 ************************************ 00:15:07.027 START TEST raid_read_error_test 00:15:07.027 ************************************ 00:15:07.027 15:41:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:15:07.027 15:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:15:07.027 15:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:15:07.027 15:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:15:07.027 
15:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:07.027 15:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:07.027 15:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:07.027 15:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:07.027 15:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:07.027 15:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:07.027 15:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:07.027 15:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:07.027 15:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:07.027 15:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:07.027 15:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:07.027 15:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:15:07.027 15:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:07.027 15:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:07.027 15:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:07.027 15:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:07.027 15:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:07.027 15:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:07.027 15:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:07.027 15:41:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:07.027 15:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:07.027 15:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:15:07.027 15:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:15:07.027 15:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:07.027 15:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.APLavCdpzB 00:15:07.027 15:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75046 00:15:07.027 15:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75046 00:15:07.027 15:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:07.027 15:41:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75046 ']' 00:15:07.027 15:41:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.027 15:41:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:07.027 15:41:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:07.027 15:41:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:07.027 15:41:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.286 [2024-12-06 15:41:50.344775] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:15:07.286 [2024-12-06 15:41:50.345155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75046 ] 00:15:07.286 [2024-12-06 15:41:50.534454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.544 [2024-12-06 15:41:50.680558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.802 [2024-12-06 15:41:50.949864] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:07.802 [2024-12-06 15:41:50.950133] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:08.061 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:08.061 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:08.061 15:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:08.061 15:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:08.061 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.061 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.061 BaseBdev1_malloc 00:15:08.061 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.061 15:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:08.061 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.061 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.061 true 00:15:08.061 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:08.061 15:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:08.061 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.061 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.061 [2024-12-06 15:41:51.339367] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:08.061 [2024-12-06 15:41:51.339438] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.061 [2024-12-06 15:41:51.339466] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:08.061 [2024-12-06 15:41:51.339482] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.061 [2024-12-06 15:41:51.342308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.061 [2024-12-06 15:41:51.342356] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:08.061 BaseBdev1 00:15:08.061 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.061 15:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:08.061 15:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:08.061 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.061 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.321 BaseBdev2_malloc 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.321 true 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.321 [2024-12-06 15:41:51.421185] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:08.321 [2024-12-06 15:41:51.421372] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.321 [2024-12-06 15:41:51.421428] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:08.321 [2024-12-06 15:41:51.421521] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.321 [2024-12-06 15:41:51.424366] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.321 [2024-12-06 15:41:51.424412] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:08.321 BaseBdev2 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.321 BaseBdev3_malloc 00:15:08.321 15:41:51 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.321 true 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.321 [2024-12-06 15:41:51.513115] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:08.321 [2024-12-06 15:41:51.513285] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.321 [2024-12-06 15:41:51.513342] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:08.321 [2024-12-06 15:41:51.513428] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.321 [2024-12-06 15:41:51.516373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.321 [2024-12-06 15:41:51.516543] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:08.321 BaseBdev3 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.321 BaseBdev4_malloc 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.321 true 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.321 [2024-12-06 15:41:51.595091] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:08.321 [2024-12-06 15:41:51.595266] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.321 [2024-12-06 15:41:51.595324] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:08.321 [2024-12-06 15:41:51.595420] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.321 [2024-12-06 15:41:51.598301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.321 [2024-12-06 15:41:51.598455] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:08.321 BaseBdev4 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.321 [2024-12-06 15:41:51.607158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:08.321 [2024-12-06 15:41:51.609765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:08.321 [2024-12-06 15:41:51.609986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:08.321 [2024-12-06 15:41:51.610113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:08.321 [2024-12-06 15:41:51.610428] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:08.321 [2024-12-06 15:41:51.610448] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:08.321 [2024-12-06 15:41:51.610788] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:15:08.321 [2024-12-06 15:41:51.611005] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:08.321 [2024-12-06 15:41:51.611018] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:08.321 [2024-12-06 15:41:51.611256] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:08.321 15:41:51 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:08.321 15:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:08.580 15:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.580 15:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.580 15:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.580 15:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.580 15:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.580 15:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.580 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.580 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.580 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.580 15:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.580 "name": "raid_bdev1", 00:15:08.580 "uuid": "45da5c9e-e701-4700-b32f-15de34de52dd", 00:15:08.580 "strip_size_kb": 0, 00:15:08.580 "state": "online", 00:15:08.580 "raid_level": "raid1", 00:15:08.580 "superblock": true, 00:15:08.580 "num_base_bdevs": 4, 00:15:08.580 "num_base_bdevs_discovered": 4, 00:15:08.580 "num_base_bdevs_operational": 4, 00:15:08.580 "base_bdevs_list": [ 00:15:08.580 { 
00:15:08.580 "name": "BaseBdev1", 00:15:08.580 "uuid": "2455398f-e741-588c-ba4d-18a9e6f3e178", 00:15:08.580 "is_configured": true, 00:15:08.580 "data_offset": 2048, 00:15:08.580 "data_size": 63488 00:15:08.580 }, 00:15:08.580 { 00:15:08.580 "name": "BaseBdev2", 00:15:08.580 "uuid": "fa5a7b0b-35c0-5509-9af8-89a9a288aecc", 00:15:08.580 "is_configured": true, 00:15:08.580 "data_offset": 2048, 00:15:08.580 "data_size": 63488 00:15:08.580 }, 00:15:08.580 { 00:15:08.580 "name": "BaseBdev3", 00:15:08.581 "uuid": "ea90345b-c2ec-5a76-b224-67528a71041d", 00:15:08.581 "is_configured": true, 00:15:08.581 "data_offset": 2048, 00:15:08.581 "data_size": 63488 00:15:08.581 }, 00:15:08.581 { 00:15:08.581 "name": "BaseBdev4", 00:15:08.581 "uuid": "ac420edc-9230-5323-b748-b96d971cda8d", 00:15:08.581 "is_configured": true, 00:15:08.581 "data_offset": 2048, 00:15:08.581 "data_size": 63488 00:15:08.581 } 00:15:08.581 ] 00:15:08.581 }' 00:15:08.581 15:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.581 15:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.839 15:41:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:08.839 15:41:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:09.097 [2024-12-06 15:41:52.179987] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:15:10.035 15:41:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:10.035 15:41:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.035 15:41:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.035 15:41:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.035 15:41:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:10.035 15:41:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:15:10.035 15:41:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:15:10.035 15:41:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:15:10.035 15:41:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:10.035 15:41:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.035 15:41:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.035 15:41:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:10.035 15:41:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:10.035 15:41:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:10.035 15:41:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.035 15:41:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.035 15:41:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.035 15:41:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.035 15:41:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.035 15:41:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.035 15:41:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.035 15:41:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.035 15:41:53 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.035 15:41:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.035 "name": "raid_bdev1", 00:15:10.035 "uuid": "45da5c9e-e701-4700-b32f-15de34de52dd", 00:15:10.035 "strip_size_kb": 0, 00:15:10.035 "state": "online", 00:15:10.035 "raid_level": "raid1", 00:15:10.035 "superblock": true, 00:15:10.035 "num_base_bdevs": 4, 00:15:10.035 "num_base_bdevs_discovered": 4, 00:15:10.035 "num_base_bdevs_operational": 4, 00:15:10.035 "base_bdevs_list": [ 00:15:10.035 { 00:15:10.035 "name": "BaseBdev1", 00:15:10.035 "uuid": "2455398f-e741-588c-ba4d-18a9e6f3e178", 00:15:10.035 "is_configured": true, 00:15:10.035 "data_offset": 2048, 00:15:10.035 "data_size": 63488 00:15:10.035 }, 00:15:10.035 { 00:15:10.035 "name": "BaseBdev2", 00:15:10.035 "uuid": "fa5a7b0b-35c0-5509-9af8-89a9a288aecc", 00:15:10.035 "is_configured": true, 00:15:10.035 "data_offset": 2048, 00:15:10.035 "data_size": 63488 00:15:10.035 }, 00:15:10.035 { 00:15:10.035 "name": "BaseBdev3", 00:15:10.035 "uuid": "ea90345b-c2ec-5a76-b224-67528a71041d", 00:15:10.035 "is_configured": true, 00:15:10.035 "data_offset": 2048, 00:15:10.035 "data_size": 63488 00:15:10.035 }, 00:15:10.035 { 00:15:10.035 "name": "BaseBdev4", 00:15:10.035 "uuid": "ac420edc-9230-5323-b748-b96d971cda8d", 00:15:10.035 "is_configured": true, 00:15:10.035 "data_offset": 2048, 00:15:10.035 "data_size": 63488 00:15:10.035 } 00:15:10.035 ] 00:15:10.035 }' 00:15:10.035 15:41:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.035 15:41:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.294 15:41:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:10.294 15:41:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.294 15:41:53 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:10.294 [2024-12-06 15:41:53.574928] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:10.294 [2024-12-06 15:41:53.574971] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:10.294 { 00:15:10.294 "results": [ 00:15:10.294 { 00:15:10.294 "job": "raid_bdev1", 00:15:10.294 "core_mask": "0x1", 00:15:10.294 "workload": "randrw", 00:15:10.294 "percentage": 50, 00:15:10.294 "status": "finished", 00:15:10.294 "queue_depth": 1, 00:15:10.294 "io_size": 131072, 00:15:10.294 "runtime": 1.3946, 00:15:10.294 "iops": 7954.252115301879, 00:15:10.294 "mibps": 994.2815144127349, 00:15:10.294 "io_failed": 0, 00:15:10.294 "io_timeout": 0, 00:15:10.294 "avg_latency_us": 122.73612542661408, 00:15:10.294 "min_latency_us": 27.142168674698794, 00:15:10.294 "max_latency_us": 1546.2811244979919 00:15:10.294 } 00:15:10.294 ], 00:15:10.294 "core_count": 1 00:15:10.294 } 00:15:10.294 [2024-12-06 15:41:53.577998] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:10.294 [2024-12-06 15:41:53.578080] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.294 [2024-12-06 15:41:53.578236] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:10.294 [2024-12-06 15:41:53.578255] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:10.294 15:41:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.294 15:41:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75046 00:15:10.294 15:41:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75046 ']' 00:15:10.294 15:41:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75046 00:15:10.294 15:41:53 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:15:10.554 15:41:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:10.554 15:41:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75046 00:15:10.554 15:41:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:10.554 15:41:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:10.554 15:41:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75046' 00:15:10.554 killing process with pid 75046 00:15:10.554 15:41:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75046 00:15:10.554 [2024-12-06 15:41:53.636634] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:10.554 15:41:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75046 00:15:10.813 [2024-12-06 15:41:54.025494] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:12.203 15:41:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.APLavCdpzB 00:15:12.203 15:41:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:12.203 15:41:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:12.203 15:41:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:15:12.203 15:41:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:15:12.203 15:41:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:12.203 15:41:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:12.203 15:41:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:15:12.203 00:15:12.203 real 0m5.227s 00:15:12.203 user 0m6.025s 00:15:12.203 sys 0m0.851s 
00:15:12.203 15:41:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:12.203 15:41:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.203 ************************************ 00:15:12.203 END TEST raid_read_error_test 00:15:12.203 ************************************ 00:15:12.461 15:41:55 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:15:12.461 15:41:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:12.461 15:41:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:12.461 15:41:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:12.461 ************************************ 00:15:12.461 START TEST raid_write_error_test 00:15:12.461 ************************************ 00:15:12.461 15:41:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:15:12.461 15:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:15:12.461 15:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:15:12.461 15:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:15:12.461 15:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:12.461 15:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:12.461 15:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:12.461 15:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:12.461 15:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:12.461 15:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:12.461 15:41:55 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:12.461 15:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:12.461 15:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:12.461 15:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:12.461 15:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:12.461 15:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:15:12.461 15:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:12.461 15:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:12.461 15:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:12.461 15:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:12.461 15:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:12.461 15:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:12.461 15:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:12.461 15:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:12.461 15:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:12.461 15:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:15:12.462 15:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:15:12.462 15:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:12.462 15:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.9QEp3Saqrm 00:15:12.462 15:41:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75201 00:15:12.462 15:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75201 00:15:12.462 15:41:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75201 ']' 00:15:12.462 15:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:12.462 15:41:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.462 15:41:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:12.462 15:41:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.462 15:41:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:12.462 15:41:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.462 [2024-12-06 15:41:55.667533] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:15:12.462 [2024-12-06 15:41:55.667720] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75201 ] 00:15:12.720 [2024-12-06 15:41:55.865548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.990 [2024-12-06 15:41:56.019233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.250 [2024-12-06 15:41:56.277719] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:13.250 [2024-12-06 15:41:56.277803] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:13.250 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:13.250 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:13.250 15:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:13.250 15:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:13.250 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.250 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.510 BaseBdev1_malloc 00:15:13.510 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.510 15:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:13.510 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.510 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.510 true 00:15:13.510 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:13.510 15:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:13.510 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.510 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.510 [2024-12-06 15:41:56.608316] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:13.510 [2024-12-06 15:41:56.608525] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.510 [2024-12-06 15:41:56.608588] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:13.510 [2024-12-06 15:41:56.608680] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.510 [2024-12-06 15:41:56.611600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.510 BaseBdev1 00:15:13.510 [2024-12-06 15:41:56.611769] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:13.510 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.510 15:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:13.510 15:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:13.510 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.510 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.510 BaseBdev2_malloc 00:15:13.510 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.510 15:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:13.510 15:41:56 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.510 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.510 true 00:15:13.510 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.510 15:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:13.510 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.510 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.510 [2024-12-06 15:41:56.686164] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:13.510 [2024-12-06 15:41:56.686228] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.510 [2024-12-06 15:41:56.686249] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:13.510 [2024-12-06 15:41:56.686264] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.510 [2024-12-06 15:41:56.689140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.510 [2024-12-06 15:41:56.689187] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:13.510 BaseBdev2 00:15:13.510 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.510 15:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:13.510 15:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:13.510 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.510 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:15:13.510 BaseBdev3_malloc 00:15:13.510 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.510 15:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:13.510 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.510 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.510 true 00:15:13.510 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.510 15:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:13.510 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.510 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.510 [2024-12-06 15:41:56.776070] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:13.510 [2024-12-06 15:41:56.776257] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.510 [2024-12-06 15:41:56.776290] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:13.510 [2024-12-06 15:41:56.776307] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.510 [2024-12-06 15:41:56.779174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.510 [2024-12-06 15:41:56.779220] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:13.510 BaseBdev3 00:15:13.510 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.510 15:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:13.511 15:41:56 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:13.511 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.511 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.771 BaseBdev4_malloc 00:15:13.771 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.771 15:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:15:13.771 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.771 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.771 true 00:15:13.771 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.771 15:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:13.771 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.771 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.771 [2024-12-06 15:41:56.853230] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:13.771 [2024-12-06 15:41:56.853292] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.771 [2024-12-06 15:41:56.853315] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:13.771 [2024-12-06 15:41:56.853337] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.771 [2024-12-06 15:41:56.856164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.771 [2024-12-06 15:41:56.856210] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:13.771 BaseBdev4 
00:15:13.771 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.771 15:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:15:13.771 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.771 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.771 [2024-12-06 15:41:56.865279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:13.771 [2024-12-06 15:41:56.868244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:13.771 [2024-12-06 15:41:56.868464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:13.771 [2024-12-06 15:41:56.868680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:13.771 [2024-12-06 15:41:56.869038] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:13.771 [2024-12-06 15:41:56.869166] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:13.771 [2024-12-06 15:41:56.869538] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:15:13.771 [2024-12-06 15:41:56.869925] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:13.771 [2024-12-06 15:41:56.869948] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:13.771 [2024-12-06 15:41:56.870240] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.771 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.771 15:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:15:13.771 15:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.771 15:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.771 15:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:13.771 15:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:13.771 15:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:13.771 15:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.771 15:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.771 15:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.771 15:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.771 15:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.771 15:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.771 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.771 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.771 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.771 15:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.771 "name": "raid_bdev1", 00:15:13.771 "uuid": "70cd247f-2347-4837-bd0a-54864749d7a0", 00:15:13.771 "strip_size_kb": 0, 00:15:13.771 "state": "online", 00:15:13.771 "raid_level": "raid1", 00:15:13.771 "superblock": true, 00:15:13.771 "num_base_bdevs": 4, 00:15:13.771 "num_base_bdevs_discovered": 4, 00:15:13.771 
"num_base_bdevs_operational": 4, 00:15:13.771 "base_bdevs_list": [ 00:15:13.771 { 00:15:13.771 "name": "BaseBdev1", 00:15:13.771 "uuid": "15f70e2a-8688-52db-9278-4d91c30b89dc", 00:15:13.771 "is_configured": true, 00:15:13.771 "data_offset": 2048, 00:15:13.771 "data_size": 63488 00:15:13.771 }, 00:15:13.771 { 00:15:13.771 "name": "BaseBdev2", 00:15:13.771 "uuid": "2b08b5a9-b9c3-50ad-9d1d-adcf2013651d", 00:15:13.771 "is_configured": true, 00:15:13.771 "data_offset": 2048, 00:15:13.771 "data_size": 63488 00:15:13.771 }, 00:15:13.771 { 00:15:13.771 "name": "BaseBdev3", 00:15:13.771 "uuid": "b727ed66-db8c-549d-a530-0a437b6539da", 00:15:13.771 "is_configured": true, 00:15:13.771 "data_offset": 2048, 00:15:13.771 "data_size": 63488 00:15:13.771 }, 00:15:13.771 { 00:15:13.771 "name": "BaseBdev4", 00:15:13.771 "uuid": "74332166-99e8-58c3-ad1d-0981bb04875c", 00:15:13.771 "is_configured": true, 00:15:13.771 "data_offset": 2048, 00:15:13.771 "data_size": 63488 00:15:13.771 } 00:15:13.771 ] 00:15:13.771 }' 00:15:13.771 15:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.771 15:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.031 15:41:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:14.031 15:41:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:14.289 [2024-12-06 15:41:57.391152] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:15:15.227 15:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:15.227 15:41:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.227 15:41:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.227 [2024-12-06 15:41:58.312114] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:15:15.227 [2024-12-06 15:41:58.312187] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:15.227 [2024-12-06 15:41:58.312453] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:15:15.227 15:41:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.227 15:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:15.227 15:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:15:15.227 15:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:15:15.227 15:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:15:15.227 15:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:15.227 15:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.227 15:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.227 15:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:15.227 15:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:15.227 15:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:15.227 15:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.227 15:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.227 15:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.227 15:41:58 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.227 15:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.227 15:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.227 15:41:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.227 15:41:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.227 15:41:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.227 15:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.227 "name": "raid_bdev1", 00:15:15.227 "uuid": "70cd247f-2347-4837-bd0a-54864749d7a0", 00:15:15.227 "strip_size_kb": 0, 00:15:15.227 "state": "online", 00:15:15.227 "raid_level": "raid1", 00:15:15.227 "superblock": true, 00:15:15.227 "num_base_bdevs": 4, 00:15:15.227 "num_base_bdevs_discovered": 3, 00:15:15.227 "num_base_bdevs_operational": 3, 00:15:15.227 "base_bdevs_list": [ 00:15:15.227 { 00:15:15.227 "name": null, 00:15:15.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.227 "is_configured": false, 00:15:15.227 "data_offset": 0, 00:15:15.227 "data_size": 63488 00:15:15.227 }, 00:15:15.227 { 00:15:15.227 "name": "BaseBdev2", 00:15:15.227 "uuid": "2b08b5a9-b9c3-50ad-9d1d-adcf2013651d", 00:15:15.227 "is_configured": true, 00:15:15.227 "data_offset": 2048, 00:15:15.227 "data_size": 63488 00:15:15.227 }, 00:15:15.227 { 00:15:15.227 "name": "BaseBdev3", 00:15:15.227 "uuid": "b727ed66-db8c-549d-a530-0a437b6539da", 00:15:15.227 "is_configured": true, 00:15:15.227 "data_offset": 2048, 00:15:15.227 "data_size": 63488 00:15:15.227 }, 00:15:15.227 { 00:15:15.227 "name": "BaseBdev4", 00:15:15.227 "uuid": "74332166-99e8-58c3-ad1d-0981bb04875c", 00:15:15.227 "is_configured": true, 00:15:15.227 "data_offset": 2048, 00:15:15.227 "data_size": 63488 00:15:15.227 } 00:15:15.227 ] 
00:15:15.227 }' 00:15:15.227 15:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.227 15:41:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.487 15:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:15.487 15:41:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.487 15:41:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.487 [2024-12-06 15:41:58.738661] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:15.487 [2024-12-06 15:41:58.738698] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:15.487 [2024-12-06 15:41:58.741386] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:15.487 [2024-12-06 15:41:58.741443] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.487 [2024-12-06 15:41:58.741576] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:15.487 [2024-12-06 15:41:58.741594] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:15.487 { 00:15:15.487 "results": [ 00:15:15.487 { 00:15:15.487 "job": "raid_bdev1", 00:15:15.487 "core_mask": "0x1", 00:15:15.487 "workload": "randrw", 00:15:15.487 "percentage": 50, 00:15:15.487 "status": "finished", 00:15:15.487 "queue_depth": 1, 00:15:15.487 "io_size": 131072, 00:15:15.487 "runtime": 1.347065, 00:15:15.487 "iops": 8915.67964426364, 00:15:15.487 "mibps": 1114.459955532955, 00:15:15.487 "io_failed": 0, 00:15:15.487 "io_timeout": 0, 00:15:15.487 "avg_latency_us": 109.4512762791382, 00:15:15.487 "min_latency_us": 24.469076305220884, 00:15:15.487 "max_latency_us": 1506.8016064257029 00:15:15.487 } 00:15:15.487 ], 00:15:15.487 "core_count": 1 
00:15:15.487 } 00:15:15.487 15:41:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.487 15:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75201 00:15:15.487 15:41:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75201 ']' 00:15:15.487 15:41:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75201 00:15:15.487 15:41:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:15:15.487 15:41:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:15.487 15:41:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75201 00:15:15.746 15:41:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:15.746 15:41:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:15.746 killing process with pid 75201 00:15:15.746 15:41:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75201' 00:15:15.746 15:41:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75201 00:15:15.746 [2024-12-06 15:41:58.795803] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:15.746 15:41:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75201 00:15:16.005 [2024-12-06 15:41:59.152648] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:17.385 15:42:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.9QEp3Saqrm 00:15:17.385 15:42:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:17.385 15:42:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:17.385 15:42:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:15:17.385 15:42:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:15:17.385 15:42:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:17.385 15:42:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:17.385 ************************************ 00:15:17.385 END TEST raid_write_error_test 00:15:17.386 ************************************ 00:15:17.386 15:42:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:15:17.386 00:15:17.386 real 0m4.957s 00:15:17.386 user 0m5.637s 00:15:17.386 sys 0m0.825s 00:15:17.386 15:42:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:17.386 15:42:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.386 15:42:00 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:15:17.386 15:42:00 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:15:17.386 15:42:00 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:15:17.386 15:42:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:17.386 15:42:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:17.386 15:42:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:17.386 ************************************ 00:15:17.386 START TEST raid_rebuild_test 00:15:17.386 ************************************ 00:15:17.386 15:42:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:15:17.386 15:42:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:17.386 15:42:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:17.386 15:42:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:17.386 
15:42:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:17.386 15:42:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:17.386 15:42:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:17.386 15:42:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:17.386 15:42:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:17.386 15:42:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:17.386 15:42:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:17.386 15:42:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:17.386 15:42:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:17.386 15:42:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:17.386 15:42:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:17.386 15:42:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:17.386 15:42:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:17.386 15:42:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:17.386 15:42:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:17.386 15:42:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:17.386 15:42:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:17.386 15:42:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:17.386 15:42:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:17.386 15:42:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:15:17.386 15:42:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75346 00:15:17.386 15:42:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:17.386 15:42:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75346 00:15:17.386 15:42:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75346 ']' 00:15:17.386 15:42:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.386 15:42:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:17.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.386 15:42:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.386 15:42:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:17.386 15:42:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.645 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:17.645 Zero copy mechanism will not be used. 00:15:17.645 [2024-12-06 15:42:00.692664] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:15:17.645 [2024-12-06 15:42:00.692830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75346 ] 00:15:17.645 [2024-12-06 15:42:00.879181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.905 [2024-12-06 15:42:01.015788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.165 [2024-12-06 15:42:01.263820] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.165 [2024-12-06 15:42:01.264033] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.423 15:42:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:18.423 15:42:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:18.423 15:42:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:18.423 15:42:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:18.423 15:42:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.423 15:42:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.423 BaseBdev1_malloc 00:15:18.423 15:42:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.423 15:42:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:18.423 15:42:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.423 15:42:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.423 [2024-12-06 15:42:01.636209] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:18.423 
[2024-12-06 15:42:01.636429] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.423 [2024-12-06 15:42:01.636496] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:18.423 [2024-12-06 15:42:01.636605] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.423 [2024-12-06 15:42:01.639314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.423 [2024-12-06 15:42:01.639492] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:18.423 BaseBdev1 00:15:18.423 15:42:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.423 15:42:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:18.423 15:42:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:18.423 15:42:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.423 15:42:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.423 BaseBdev2_malloc 00:15:18.423 15:42:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.423 15:42:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:18.423 15:42:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.423 15:42:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.423 [2024-12-06 15:42:01.699624] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:18.423 [2024-12-06 15:42:01.699809] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.423 [2024-12-06 15:42:01.699875] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:15:18.423 [2024-12-06 15:42:01.699958] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.423 [2024-12-06 15:42:01.702713] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.423 [2024-12-06 15:42:01.702855] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:18.423 BaseBdev2 00:15:18.423 15:42:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.423 15:42:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:18.423 15:42:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.423 15:42:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.698 spare_malloc 00:15:18.698 15:42:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.698 15:42:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:18.698 15:42:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.698 15:42:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.698 spare_delay 00:15:18.698 15:42:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.698 15:42:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:18.698 15:42:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.698 15:42:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.698 [2024-12-06 15:42:01.791961] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:18.698 [2024-12-06 15:42:01.792173] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:15:18.698 [2024-12-06 15:42:01.792241] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:18.698 [2024-12-06 15:42:01.792350] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.698 [2024-12-06 15:42:01.795555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.698 [2024-12-06 15:42:01.795712] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:18.698 spare 00:15:18.698 15:42:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.698 15:42:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:18.698 15:42:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.698 15:42:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.698 [2024-12-06 15:42:01.804031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:18.698 [2024-12-06 15:42:01.806559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:18.698 [2024-12-06 15:42:01.806780] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:18.698 [2024-12-06 15:42:01.806804] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:18.698 [2024-12-06 15:42:01.807087] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:18.698 [2024-12-06 15:42:01.807284] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:18.698 [2024-12-06 15:42:01.807299] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:18.698 [2024-12-06 15:42:01.807463] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:15:18.698 15:42:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.698 15:42:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:18.698 15:42:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:18.698 15:42:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:18.698 15:42:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:18.698 15:42:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:18.698 15:42:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:18.698 15:42:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.698 15:42:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.698 15:42:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.698 15:42:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.698 15:42:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.698 15:42:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.698 15:42:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.698 15:42:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.698 15:42:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.698 15:42:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.698 "name": "raid_bdev1", 00:15:18.698 "uuid": "0cac6929-5c79-4144-a8f0-0b15c33e365d", 00:15:18.698 "strip_size_kb": 0, 00:15:18.698 "state": "online", 00:15:18.698 
"raid_level": "raid1", 00:15:18.698 "superblock": false, 00:15:18.698 "num_base_bdevs": 2, 00:15:18.698 "num_base_bdevs_discovered": 2, 00:15:18.698 "num_base_bdevs_operational": 2, 00:15:18.698 "base_bdevs_list": [ 00:15:18.698 { 00:15:18.698 "name": "BaseBdev1", 00:15:18.698 "uuid": "3c0aa268-f7a9-5cb5-8a87-4c0d94be4c40", 00:15:18.698 "is_configured": true, 00:15:18.698 "data_offset": 0, 00:15:18.698 "data_size": 65536 00:15:18.698 }, 00:15:18.698 { 00:15:18.698 "name": "BaseBdev2", 00:15:18.698 "uuid": "77b2d12d-688e-5035-9122-621a4dcb488c", 00:15:18.698 "is_configured": true, 00:15:18.698 "data_offset": 0, 00:15:18.698 "data_size": 65536 00:15:18.698 } 00:15:18.698 ] 00:15:18.698 }' 00:15:18.698 15:42:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.698 15:42:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.265 15:42:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:19.265 15:42:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:19.265 15:42:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.265 15:42:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.265 [2024-12-06 15:42:02.280055] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:19.265 15:42:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.265 15:42:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:19.265 15:42:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:19.265 15:42:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.265 15:42:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.265 15:42:02 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.265 15:42:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.265 15:42:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:19.265 15:42:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:19.265 15:42:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:19.265 15:42:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:19.265 15:42:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:19.265 15:42:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:19.265 15:42:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:19.265 15:42:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:19.265 15:42:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:19.265 15:42:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:19.265 15:42:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:19.265 15:42:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:19.265 15:42:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:19.265 15:42:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:19.528 [2024-12-06 15:42:02.591843] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:19.528 /dev/nbd0 00:15:19.528 15:42:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:19.528 15:42:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:15:19.528 15:42:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:19.528 15:42:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:19.528 15:42:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:19.528 15:42:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:19.528 15:42:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:19.528 15:42:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:19.528 15:42:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:19.528 15:42:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:19.528 15:42:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:19.528 1+0 records in 00:15:19.528 1+0 records out 00:15:19.528 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000697562 s, 5.9 MB/s 00:15:19.528 15:42:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.528 15:42:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:19.528 15:42:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.528 15:42:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:19.528 15:42:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:19.528 15:42:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:19.528 15:42:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:19.528 15:42:02 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:19.528 15:42:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:19.528 15:42:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:15:24.806 65536+0 records in 00:15:24.806 65536+0 records out 00:15:24.806 33554432 bytes (34 MB, 32 MiB) copied, 4.91576 s, 6.8 MB/s 00:15:24.806 15:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:24.806 15:42:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:24.806 15:42:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:24.806 15:42:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:24.806 15:42:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:24.806 15:42:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:24.806 15:42:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:24.806 15:42:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:24.806 [2024-12-06 15:42:07.842135] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:24.806 15:42:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:24.806 15:42:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:24.806 15:42:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:24.806 15:42:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:24.806 15:42:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:24.806 15:42:07 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:15:24.806 15:42:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:24.806 15:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:24.806 15:42:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.806 15:42:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.806 [2024-12-06 15:42:07.858871] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:24.806 15:42:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.806 15:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:24.806 15:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:24.806 15:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:24.806 15:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:24.806 15:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:24.806 15:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:24.806 15:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.806 15:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.806 15:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.806 15:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.806 15:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.806 15:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.806 15:42:07 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.806 15:42:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.806 15:42:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.806 15:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.806 "name": "raid_bdev1", 00:15:24.806 "uuid": "0cac6929-5c79-4144-a8f0-0b15c33e365d", 00:15:24.806 "strip_size_kb": 0, 00:15:24.806 "state": "online", 00:15:24.806 "raid_level": "raid1", 00:15:24.806 "superblock": false, 00:15:24.806 "num_base_bdevs": 2, 00:15:24.806 "num_base_bdevs_discovered": 1, 00:15:24.806 "num_base_bdevs_operational": 1, 00:15:24.806 "base_bdevs_list": [ 00:15:24.806 { 00:15:24.806 "name": null, 00:15:24.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.806 "is_configured": false, 00:15:24.806 "data_offset": 0, 00:15:24.806 "data_size": 65536 00:15:24.806 }, 00:15:24.806 { 00:15:24.806 "name": "BaseBdev2", 00:15:24.806 "uuid": "77b2d12d-688e-5035-9122-621a4dcb488c", 00:15:24.806 "is_configured": true, 00:15:24.806 "data_offset": 0, 00:15:24.806 "data_size": 65536 00:15:24.806 } 00:15:24.806 ] 00:15:24.806 }' 00:15:24.806 15:42:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.806 15:42:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.066 15:42:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:25.066 15:42:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.066 15:42:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.066 [2024-12-06 15:42:08.278367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:25.066 [2024-12-06 15:42:08.299622] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:15:25.066 15:42:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.066 15:42:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:25.066 [2024-12-06 15:42:08.302190] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:26.445 15:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:26.445 15:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.445 15:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:26.445 15:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:26.445 15:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.445 15:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.445 15:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.445 15:42:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.445 15:42:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.445 15:42:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.445 15:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.445 "name": "raid_bdev1", 00:15:26.445 "uuid": "0cac6929-5c79-4144-a8f0-0b15c33e365d", 00:15:26.445 "strip_size_kb": 0, 00:15:26.445 "state": "online", 00:15:26.445 "raid_level": "raid1", 00:15:26.445 "superblock": false, 00:15:26.445 "num_base_bdevs": 2, 00:15:26.445 "num_base_bdevs_discovered": 2, 00:15:26.445 "num_base_bdevs_operational": 2, 00:15:26.445 "process": { 00:15:26.445 "type": "rebuild", 00:15:26.445 "target": "spare", 00:15:26.445 "progress": { 00:15:26.445 
"blocks": 20480, 00:15:26.445 "percent": 31 00:15:26.445 } 00:15:26.445 }, 00:15:26.445 "base_bdevs_list": [ 00:15:26.445 { 00:15:26.445 "name": "spare", 00:15:26.445 "uuid": "f1ed2730-794d-52c6-8a90-2bbde5fb466e", 00:15:26.445 "is_configured": true, 00:15:26.445 "data_offset": 0, 00:15:26.445 "data_size": 65536 00:15:26.445 }, 00:15:26.445 { 00:15:26.445 "name": "BaseBdev2", 00:15:26.445 "uuid": "77b2d12d-688e-5035-9122-621a4dcb488c", 00:15:26.445 "is_configured": true, 00:15:26.445 "data_offset": 0, 00:15:26.445 "data_size": 65536 00:15:26.445 } 00:15:26.445 ] 00:15:26.445 }' 00:15:26.445 15:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.445 15:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:26.445 15:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.445 15:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:26.445 15:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:26.445 15:42:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.445 15:42:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.445 [2024-12-06 15:42:09.438296] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:26.445 [2024-12-06 15:42:09.511888] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:26.445 [2024-12-06 15:42:09.512132] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.445 [2024-12-06 15:42:09.512156] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:26.445 [2024-12-06 15:42:09.512171] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:26.445 15:42:09 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.445 15:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:26.445 15:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:26.445 15:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.445 15:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:26.445 15:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:26.445 15:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:26.445 15:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.445 15:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.446 15:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.446 15:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.446 15:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.446 15:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.446 15:42:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.446 15:42:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.446 15:42:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.446 15:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.446 "name": "raid_bdev1", 00:15:26.446 "uuid": "0cac6929-5c79-4144-a8f0-0b15c33e365d", 00:15:26.446 "strip_size_kb": 0, 00:15:26.446 "state": "online", 00:15:26.446 "raid_level": "raid1", 00:15:26.446 
"superblock": false, 00:15:26.446 "num_base_bdevs": 2, 00:15:26.446 "num_base_bdevs_discovered": 1, 00:15:26.446 "num_base_bdevs_operational": 1, 00:15:26.446 "base_bdevs_list": [ 00:15:26.446 { 00:15:26.446 "name": null, 00:15:26.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.446 "is_configured": false, 00:15:26.446 "data_offset": 0, 00:15:26.446 "data_size": 65536 00:15:26.446 }, 00:15:26.446 { 00:15:26.446 "name": "BaseBdev2", 00:15:26.446 "uuid": "77b2d12d-688e-5035-9122-621a4dcb488c", 00:15:26.446 "is_configured": true, 00:15:26.446 "data_offset": 0, 00:15:26.446 "data_size": 65536 00:15:26.446 } 00:15:26.446 ] 00:15:26.446 }' 00:15:26.446 15:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.446 15:42:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.705 15:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:26.705 15:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.705 15:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:26.705 15:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:26.705 15:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.705 15:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.705 15:42:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.705 15:42:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.705 15:42:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.964 15:42:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.964 15:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:15:26.964 "name": "raid_bdev1", 00:15:26.964 "uuid": "0cac6929-5c79-4144-a8f0-0b15c33e365d", 00:15:26.964 "strip_size_kb": 0, 00:15:26.964 "state": "online", 00:15:26.964 "raid_level": "raid1", 00:15:26.964 "superblock": false, 00:15:26.964 "num_base_bdevs": 2, 00:15:26.965 "num_base_bdevs_discovered": 1, 00:15:26.965 "num_base_bdevs_operational": 1, 00:15:26.965 "base_bdevs_list": [ 00:15:26.965 { 00:15:26.965 "name": null, 00:15:26.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.965 "is_configured": false, 00:15:26.965 "data_offset": 0, 00:15:26.965 "data_size": 65536 00:15:26.965 }, 00:15:26.965 { 00:15:26.965 "name": "BaseBdev2", 00:15:26.965 "uuid": "77b2d12d-688e-5035-9122-621a4dcb488c", 00:15:26.965 "is_configured": true, 00:15:26.965 "data_offset": 0, 00:15:26.965 "data_size": 65536 00:15:26.965 } 00:15:26.965 ] 00:15:26.965 }' 00:15:26.965 15:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.965 15:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:26.965 15:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.965 15:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:26.965 15:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:26.965 15:42:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.965 15:42:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.965 [2024-12-06 15:42:10.118730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:26.965 [2024-12-06 15:42:10.138810] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:15:26.965 15:42:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.965 
15:42:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:26.965 [2024-12-06 15:42:10.141559] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:27.902 15:42:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:27.903 15:42:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.903 15:42:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:27.903 15:42:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:27.903 15:42:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.903 15:42:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.903 15:42:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.903 15:42:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.903 15:42:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.903 15:42:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.162 15:42:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.162 "name": "raid_bdev1", 00:15:28.162 "uuid": "0cac6929-5c79-4144-a8f0-0b15c33e365d", 00:15:28.162 "strip_size_kb": 0, 00:15:28.162 "state": "online", 00:15:28.162 "raid_level": "raid1", 00:15:28.162 "superblock": false, 00:15:28.162 "num_base_bdevs": 2, 00:15:28.162 "num_base_bdevs_discovered": 2, 00:15:28.162 "num_base_bdevs_operational": 2, 00:15:28.162 "process": { 00:15:28.162 "type": "rebuild", 00:15:28.162 "target": "spare", 00:15:28.162 "progress": { 00:15:28.162 "blocks": 20480, 00:15:28.162 "percent": 31 00:15:28.162 } 00:15:28.162 }, 00:15:28.162 "base_bdevs_list": [ 
00:15:28.162 { 00:15:28.162 "name": "spare", 00:15:28.162 "uuid": "f1ed2730-794d-52c6-8a90-2bbde5fb466e", 00:15:28.162 "is_configured": true, 00:15:28.162 "data_offset": 0, 00:15:28.162 "data_size": 65536 00:15:28.162 }, 00:15:28.162 { 00:15:28.162 "name": "BaseBdev2", 00:15:28.162 "uuid": "77b2d12d-688e-5035-9122-621a4dcb488c", 00:15:28.162 "is_configured": true, 00:15:28.162 "data_offset": 0, 00:15:28.162 "data_size": 65536 00:15:28.162 } 00:15:28.162 ] 00:15:28.162 }' 00:15:28.162 15:42:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:28.162 15:42:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:28.162 15:42:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.162 15:42:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:28.162 15:42:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:28.162 15:42:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:28.162 15:42:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:28.162 15:42:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:28.162 15:42:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=379 00:15:28.162 15:42:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:28.162 15:42:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:28.162 15:42:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:28.162 15:42:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:28.162 15:42:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:28.162 
15:42:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:28.162 15:42:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.162 15:42:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.162 15:42:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.162 15:42:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.162 15:42:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.162 15:42:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.162 "name": "raid_bdev1", 00:15:28.162 "uuid": "0cac6929-5c79-4144-a8f0-0b15c33e365d", 00:15:28.162 "strip_size_kb": 0, 00:15:28.162 "state": "online", 00:15:28.162 "raid_level": "raid1", 00:15:28.162 "superblock": false, 00:15:28.162 "num_base_bdevs": 2, 00:15:28.162 "num_base_bdevs_discovered": 2, 00:15:28.162 "num_base_bdevs_operational": 2, 00:15:28.162 "process": { 00:15:28.162 "type": "rebuild", 00:15:28.162 "target": "spare", 00:15:28.162 "progress": { 00:15:28.162 "blocks": 22528, 00:15:28.162 "percent": 34 00:15:28.162 } 00:15:28.162 }, 00:15:28.162 "base_bdevs_list": [ 00:15:28.162 { 00:15:28.162 "name": "spare", 00:15:28.162 "uuid": "f1ed2730-794d-52c6-8a90-2bbde5fb466e", 00:15:28.162 "is_configured": true, 00:15:28.162 "data_offset": 0, 00:15:28.162 "data_size": 65536 00:15:28.162 }, 00:15:28.162 { 00:15:28.162 "name": "BaseBdev2", 00:15:28.162 "uuid": "77b2d12d-688e-5035-9122-621a4dcb488c", 00:15:28.162 "is_configured": true, 00:15:28.162 "data_offset": 0, 00:15:28.162 "data_size": 65536 00:15:28.162 } 00:15:28.162 ] 00:15:28.162 }' 00:15:28.162 15:42:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:28.162 15:42:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:15:28.162 15:42:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.162 15:42:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:28.162 15:42:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:29.543 15:42:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:29.543 15:42:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:29.543 15:42:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.543 15:42:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:29.543 15:42:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:29.543 15:42:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.543 15:42:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.543 15:42:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.543 15:42:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.543 15:42:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.543 15:42:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.543 15:42:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.543 "name": "raid_bdev1", 00:15:29.543 "uuid": "0cac6929-5c79-4144-a8f0-0b15c33e365d", 00:15:29.543 "strip_size_kb": 0, 00:15:29.543 "state": "online", 00:15:29.543 "raid_level": "raid1", 00:15:29.543 "superblock": false, 00:15:29.543 "num_base_bdevs": 2, 00:15:29.543 "num_base_bdevs_discovered": 2, 00:15:29.543 "num_base_bdevs_operational": 2, 00:15:29.543 "process": { 
00:15:29.543 "type": "rebuild", 00:15:29.543 "target": "spare", 00:15:29.543 "progress": { 00:15:29.543 "blocks": 45056, 00:15:29.543 "percent": 68 00:15:29.543 } 00:15:29.543 }, 00:15:29.543 "base_bdevs_list": [ 00:15:29.543 { 00:15:29.543 "name": "spare", 00:15:29.543 "uuid": "f1ed2730-794d-52c6-8a90-2bbde5fb466e", 00:15:29.543 "is_configured": true, 00:15:29.543 "data_offset": 0, 00:15:29.543 "data_size": 65536 00:15:29.543 }, 00:15:29.543 { 00:15:29.543 "name": "BaseBdev2", 00:15:29.543 "uuid": "77b2d12d-688e-5035-9122-621a4dcb488c", 00:15:29.543 "is_configured": true, 00:15:29.543 "data_offset": 0, 00:15:29.543 "data_size": 65536 00:15:29.543 } 00:15:29.543 ] 00:15:29.543 }' 00:15:29.543 15:42:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.543 15:42:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:29.543 15:42:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.543 15:42:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:29.543 15:42:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:30.112 [2024-12-06 15:42:13.367057] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:30.112 [2024-12-06 15:42:13.367154] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:30.112 [2024-12-06 15:42:13.367216] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.371 15:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:30.371 15:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:30.371 15:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.371 15:42:13 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:30.371 15:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:30.371 15:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.371 15:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.371 15:42:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.371 15:42:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.371 15:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.371 15:42:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.371 15:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.371 "name": "raid_bdev1", 00:15:30.371 "uuid": "0cac6929-5c79-4144-a8f0-0b15c33e365d", 00:15:30.371 "strip_size_kb": 0, 00:15:30.371 "state": "online", 00:15:30.371 "raid_level": "raid1", 00:15:30.371 "superblock": false, 00:15:30.371 "num_base_bdevs": 2, 00:15:30.371 "num_base_bdevs_discovered": 2, 00:15:30.371 "num_base_bdevs_operational": 2, 00:15:30.371 "base_bdevs_list": [ 00:15:30.371 { 00:15:30.371 "name": "spare", 00:15:30.371 "uuid": "f1ed2730-794d-52c6-8a90-2bbde5fb466e", 00:15:30.371 "is_configured": true, 00:15:30.371 "data_offset": 0, 00:15:30.371 "data_size": 65536 00:15:30.371 }, 00:15:30.371 { 00:15:30.371 "name": "BaseBdev2", 00:15:30.371 "uuid": "77b2d12d-688e-5035-9122-621a4dcb488c", 00:15:30.371 "is_configured": true, 00:15:30.371 "data_offset": 0, 00:15:30.371 "data_size": 65536 00:15:30.371 } 00:15:30.371 ] 00:15:30.371 }' 00:15:30.371 15:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.371 15:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:30.371 15:42:13 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.371 15:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:30.371 15:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:30.371 15:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:30.371 15:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.371 15:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:30.372 15:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:30.372 15:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.630 15:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.630 15:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.630 15:42:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.630 15:42:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.630 15:42:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.630 15:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.630 "name": "raid_bdev1", 00:15:30.630 "uuid": "0cac6929-5c79-4144-a8f0-0b15c33e365d", 00:15:30.630 "strip_size_kb": 0, 00:15:30.630 "state": "online", 00:15:30.630 "raid_level": "raid1", 00:15:30.630 "superblock": false, 00:15:30.630 "num_base_bdevs": 2, 00:15:30.630 "num_base_bdevs_discovered": 2, 00:15:30.630 "num_base_bdevs_operational": 2, 00:15:30.630 "base_bdevs_list": [ 00:15:30.630 { 00:15:30.630 "name": "spare", 00:15:30.630 "uuid": "f1ed2730-794d-52c6-8a90-2bbde5fb466e", 00:15:30.630 "is_configured": true, 
00:15:30.630 "data_offset": 0, 00:15:30.630 "data_size": 65536 00:15:30.630 }, 00:15:30.630 { 00:15:30.630 "name": "BaseBdev2", 00:15:30.630 "uuid": "77b2d12d-688e-5035-9122-621a4dcb488c", 00:15:30.630 "is_configured": true, 00:15:30.630 "data_offset": 0, 00:15:30.630 "data_size": 65536 00:15:30.630 } 00:15:30.630 ] 00:15:30.630 }' 00:15:30.630 15:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.630 15:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:30.630 15:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.630 15:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:30.630 15:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:30.630 15:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:30.630 15:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.630 15:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:30.630 15:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:30.630 15:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:30.630 15:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.630 15:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.630 15:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.630 15:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.630 15:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.630 15:42:13 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.630 15:42:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.630 15:42:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.630 15:42:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.630 15:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.630 "name": "raid_bdev1", 00:15:30.630 "uuid": "0cac6929-5c79-4144-a8f0-0b15c33e365d", 00:15:30.630 "strip_size_kb": 0, 00:15:30.630 "state": "online", 00:15:30.630 "raid_level": "raid1", 00:15:30.630 "superblock": false, 00:15:30.630 "num_base_bdevs": 2, 00:15:30.630 "num_base_bdevs_discovered": 2, 00:15:30.630 "num_base_bdevs_operational": 2, 00:15:30.630 "base_bdevs_list": [ 00:15:30.630 { 00:15:30.630 "name": "spare", 00:15:30.630 "uuid": "f1ed2730-794d-52c6-8a90-2bbde5fb466e", 00:15:30.630 "is_configured": true, 00:15:30.630 "data_offset": 0, 00:15:30.630 "data_size": 65536 00:15:30.630 }, 00:15:30.630 { 00:15:30.630 "name": "BaseBdev2", 00:15:30.630 "uuid": "77b2d12d-688e-5035-9122-621a4dcb488c", 00:15:30.630 "is_configured": true, 00:15:30.630 "data_offset": 0, 00:15:30.630 "data_size": 65536 00:15:30.630 } 00:15:30.630 ] 00:15:30.630 }' 00:15:30.630 15:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.630 15:42:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.240 15:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:31.240 15:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.240 15:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.240 [2024-12-06 15:42:14.205642] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:31.240 [2024-12-06 15:42:14.205681] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:31.240 [2024-12-06 15:42:14.205789] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:31.240 [2024-12-06 15:42:14.205878] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:31.240 [2024-12-06 15:42:14.205891] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:31.240 15:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.240 15:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:31.240 15:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.240 15:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.240 15:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.240 15:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.240 15:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:31.240 15:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:31.240 15:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:31.240 15:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:31.240 15:42:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:31.240 15:42:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:31.240 15:42:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:31.240 15:42:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:15:31.240 15:42:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:31.240 15:42:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:31.240 15:42:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:31.240 15:42:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:31.240 15:42:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:31.240 /dev/nbd0 00:15:31.240 15:42:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:31.240 15:42:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:31.240 15:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:31.240 15:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:31.240 15:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:31.240 15:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:31.240 15:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:31.240 15:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:31.240 15:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:31.240 15:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:31.240 15:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:31.240 1+0 records in 00:15:31.240 1+0 records out 00:15:31.240 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403714 s, 10.1 MB/s 00:15:31.240 15:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:31.499 15:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:31.499 15:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:31.499 15:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:31.499 15:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:31.499 15:42:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:31.499 15:42:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:31.499 15:42:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:31.499 /dev/nbd1 00:15:31.499 15:42:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:31.499 15:42:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:31.499 15:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:31.499 15:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:31.499 15:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:31.499 15:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:31.499 15:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:31.499 15:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:31.499 15:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:31.499 15:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:31.499 15:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:31.499 1+0 records in 00:15:31.499 1+0 records out 00:15:31.499 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00043816 s, 9.3 MB/s 00:15:31.499 15:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:31.758 15:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:31.758 15:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:31.758 15:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:31.758 15:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:31.758 15:42:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:31.758 15:42:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:31.758 15:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:31.758 15:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:31.758 15:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:31.758 15:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:31.758 15:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:31.758 15:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:31.758 15:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:31.758 15:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:32.015 15:42:15 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:32.015 15:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:32.015 15:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:32.015 15:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:32.015 15:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:32.015 15:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:32.015 15:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:32.015 15:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:32.015 15:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:32.015 15:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:32.273 15:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:32.273 15:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:32.273 15:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:32.273 15:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:32.273 15:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:32.273 15:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:32.273 15:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:32.273 15:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:32.273 15:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:32.273 15:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75346 00:15:32.273 15:42:15 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75346 ']' 00:15:32.273 15:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75346 00:15:32.273 15:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:15:32.273 15:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:32.273 15:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75346 00:15:32.273 killing process with pid 75346 00:15:32.273 Received shutdown signal, test time was about 60.000000 seconds 00:15:32.273 00:15:32.273 Latency(us) 00:15:32.273 [2024-12-06T15:42:15.568Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.273 [2024-12-06T15:42:15.568Z] =================================================================================================================== 00:15:32.273 [2024-12-06T15:42:15.568Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:32.273 15:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:32.273 15:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:32.273 15:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75346' 00:15:32.273 15:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75346 00:15:32.273 [2024-12-06 15:42:15.524718] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:32.273 15:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75346 00:15:32.838 [2024-12-06 15:42:15.861127] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:34.215 15:42:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:34.215 00:15:34.215 real 0m16.545s 00:15:34.215 user 0m18.097s 00:15:34.215 sys 0m3.985s 00:15:34.215 
************************************ 00:15:34.215 END TEST raid_rebuild_test 00:15:34.215 ************************************ 00:15:34.215 15:42:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:34.215 15:42:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.215 15:42:17 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:15:34.215 15:42:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:34.215 15:42:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:34.215 15:42:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:34.215 ************************************ 00:15:34.215 START TEST raid_rebuild_test_sb 00:15:34.215 ************************************ 00:15:34.215 15:42:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:15:34.215 15:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:34.215 15:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:34.215 15:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:34.215 15:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:34.215 15:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:34.215 15:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:34.215 15:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:34.215 15:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:34.215 15:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:34.215 15:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:15:34.215 15:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:34.215 15:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:34.215 15:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:34.215 15:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:34.215 15:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:34.215 15:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:34.215 15:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:34.215 15:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:34.215 15:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:34.215 15:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:34.215 15:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:34.215 15:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:34.215 15:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:34.215 15:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:34.215 15:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75776 00:15:34.215 15:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:34.215 15:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75776 00:15:34.215 15:42:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75776 ']' 00:15:34.216 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.216 15:42:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.216 15:42:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:34.216 15:42:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.216 15:42:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:34.216 15:42:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.216 [2024-12-06 15:42:17.306876] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:15:34.216 [2024-12-06 15:42:17.307246] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75776 ] 00:15:34.216 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:34.216 Zero copy mechanism will not be used. 
00:15:34.216 [2024-12-06 15:42:17.494760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.475 [2024-12-06 15:42:17.631674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.734 [2024-12-06 15:42:17.886593] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:34.734 [2024-12-06 15:42:17.886634] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:34.993 15:42:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:34.993 15:42:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:34.993 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:34.993 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:34.993 15:42:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.993 15:42:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.993 BaseBdev1_malloc 00:15:34.993 15:42:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.993 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:34.993 15:42:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.993 15:42:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.993 [2024-12-06 15:42:18.205169] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:34.993 [2024-12-06 15:42:18.205385] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.993 [2024-12-06 15:42:18.205494] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:34.993 [2024-12-06 
15:42:18.205595] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.993 [2024-12-06 15:42:18.208315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.993 BaseBdev1 00:15:34.993 [2024-12-06 15:42:18.208466] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:34.993 15:42:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.993 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:34.993 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:34.993 15:42:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.993 15:42:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.993 BaseBdev2_malloc 00:15:34.993 15:42:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.993 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:34.993 15:42:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.993 15:42:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.993 [2024-12-06 15:42:18.264140] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:34.993 [2024-12-06 15:42:18.264320] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.993 [2024-12-06 15:42:18.264380] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:34.993 [2024-12-06 15:42:18.264473] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.993 [2024-12-06 15:42:18.267212] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:15:34.993 [2024-12-06 15:42:18.267353] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:34.993 BaseBdev2 00:15:34.993 15:42:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.993 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:34.993 15:42:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.993 15:42:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.254 spare_malloc 00:15:35.254 15:42:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.254 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:35.254 15:42:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.254 15:42:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.254 spare_delay 00:15:35.254 15:42:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.254 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:35.254 15:42:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.254 15:42:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.254 [2024-12-06 15:42:18.352383] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:35.254 [2024-12-06 15:42:18.352457] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:35.254 [2024-12-06 15:42:18.352480] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:35.254 [2024-12-06 15:42:18.352496] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:35.254 [2024-12-06 15:42:18.355228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:35.254 [2024-12-06 15:42:18.355275] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:35.254 spare 00:15:35.254 15:42:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.254 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:35.254 15:42:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.254 15:42:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.254 [2024-12-06 15:42:18.364443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:35.254 [2024-12-06 15:42:18.366915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:35.254 [2024-12-06 15:42:18.367220] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:35.254 [2024-12-06 15:42:18.367321] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:35.254 [2024-12-06 15:42:18.367636] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:35.254 [2024-12-06 15:42:18.367899] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:35.254 [2024-12-06 15:42:18.367917] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:35.254 [2024-12-06 15:42:18.368073] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.254 15:42:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.254 15:42:18 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:35.254 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.254 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.254 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:35.254 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:35.254 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:35.254 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.254 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.254 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.254 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.254 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.254 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.254 15:42:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.254 15:42:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.254 15:42:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.254 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.254 "name": "raid_bdev1", 00:15:35.254 "uuid": "0a091dff-373a-4386-8dcb-ad1dab3a0bb7", 00:15:35.254 "strip_size_kb": 0, 00:15:35.254 "state": "online", 00:15:35.254 "raid_level": "raid1", 00:15:35.254 "superblock": true, 00:15:35.254 "num_base_bdevs": 2, 00:15:35.254 
"num_base_bdevs_discovered": 2, 00:15:35.254 "num_base_bdevs_operational": 2, 00:15:35.254 "base_bdevs_list": [ 00:15:35.254 { 00:15:35.254 "name": "BaseBdev1", 00:15:35.254 "uuid": "99dff676-0e4a-5fac-969b-43ca80869686", 00:15:35.254 "is_configured": true, 00:15:35.254 "data_offset": 2048, 00:15:35.254 "data_size": 63488 00:15:35.254 }, 00:15:35.254 { 00:15:35.254 "name": "BaseBdev2", 00:15:35.254 "uuid": "5bbd970d-0789-5da1-9b80-af22f02ffb0e", 00:15:35.254 "is_configured": true, 00:15:35.254 "data_offset": 2048, 00:15:35.254 "data_size": 63488 00:15:35.254 } 00:15:35.254 ] 00:15:35.254 }' 00:15:35.254 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.254 15:42:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.513 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:35.513 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:35.513 15:42:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.513 15:42:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.772 [2024-12-06 15:42:18.808161] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:35.772 15:42:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.772 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:35.772 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:35.772 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.772 15:42:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.772 15:42:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:35.772 15:42:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.772 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:35.772 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:35.772 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:35.772 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:35.772 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:35.772 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:35.772 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:35.772 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:35.772 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:35.772 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:35.772 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:35.772 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:35.772 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:35.772 15:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:36.031 [2024-12-06 15:42:19.099524] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:36.031 /dev/nbd0 00:15:36.031 15:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:36.031 15:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:15:36.031 15:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:36.031 15:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:36.031 15:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:36.031 15:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:36.031 15:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:36.031 15:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:36.031 15:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:36.031 15:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:36.031 15:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:36.031 1+0 records in 00:15:36.031 1+0 records out 00:15:36.031 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400034 s, 10.2 MB/s 00:15:36.031 15:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.031 15:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:36.031 15:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.031 15:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:36.031 15:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:36.031 15:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:36.031 15:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:36.031 15:42:19 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:36.031 15:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:36.031 15:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:15:41.335 63488+0 records in 00:15:41.335 63488+0 records out 00:15:41.335 32505856 bytes (33 MB, 31 MiB) copied, 4.5852 s, 7.1 MB/s 00:15:41.335 15:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:41.335 15:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:41.335 15:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:41.335 15:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:41.335 15:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:41.335 15:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:41.335 15:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:41.335 [2024-12-06 15:42:23.970307] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.335 15:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:41.335 15:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:41.335 15:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:41.335 15:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:41.335 15:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:41.335 15:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:15:41.335 15:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:41.335 15:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:41.335 15:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:41.335 15:42:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.335 15:42:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.335 [2024-12-06 15:42:24.006356] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:41.335 15:42:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.335 15:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:41.335 15:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.335 15:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.335 15:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:41.335 15:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:41.335 15:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:41.335 15:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.335 15:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.336 15:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.336 15:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.336 15:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.336 15:42:24 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.336 15:42:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.336 15:42:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.336 15:42:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.336 15:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.336 "name": "raid_bdev1", 00:15:41.336 "uuid": "0a091dff-373a-4386-8dcb-ad1dab3a0bb7", 00:15:41.336 "strip_size_kb": 0, 00:15:41.336 "state": "online", 00:15:41.336 "raid_level": "raid1", 00:15:41.336 "superblock": true, 00:15:41.336 "num_base_bdevs": 2, 00:15:41.336 "num_base_bdevs_discovered": 1, 00:15:41.336 "num_base_bdevs_operational": 1, 00:15:41.336 "base_bdevs_list": [ 00:15:41.336 { 00:15:41.336 "name": null, 00:15:41.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.336 "is_configured": false, 00:15:41.336 "data_offset": 0, 00:15:41.336 "data_size": 63488 00:15:41.336 }, 00:15:41.336 { 00:15:41.336 "name": "BaseBdev2", 00:15:41.336 "uuid": "5bbd970d-0789-5da1-9b80-af22f02ffb0e", 00:15:41.336 "is_configured": true, 00:15:41.336 "data_offset": 2048, 00:15:41.336 "data_size": 63488 00:15:41.336 } 00:15:41.336 ] 00:15:41.336 }' 00:15:41.336 15:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.336 15:42:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.336 15:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:41.336 15:42:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.336 15:42:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.336 [2024-12-06 15:42:24.434345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:15:41.336 [2024-12-06 15:42:24.454486] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:15:41.336 15:42:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.336 15:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:41.336 [2024-12-06 15:42:24.456890] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:42.273 15:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:42.273 15:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.273 15:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:42.273 15:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:42.273 15:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.273 15:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.273 15:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.273 15:42:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.273 15:42:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.273 15:42:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.273 15:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.273 "name": "raid_bdev1", 00:15:42.273 "uuid": "0a091dff-373a-4386-8dcb-ad1dab3a0bb7", 00:15:42.273 "strip_size_kb": 0, 00:15:42.273 "state": "online", 00:15:42.273 "raid_level": "raid1", 00:15:42.273 "superblock": true, 00:15:42.273 "num_base_bdevs": 2, 00:15:42.273 
"num_base_bdevs_discovered": 2, 00:15:42.273 "num_base_bdevs_operational": 2, 00:15:42.273 "process": { 00:15:42.273 "type": "rebuild", 00:15:42.273 "target": "spare", 00:15:42.273 "progress": { 00:15:42.273 "blocks": 20480, 00:15:42.273 "percent": 32 00:15:42.273 } 00:15:42.273 }, 00:15:42.273 "base_bdevs_list": [ 00:15:42.273 { 00:15:42.273 "name": "spare", 00:15:42.273 "uuid": "67dd631e-50ee-5b7b-bdb5-c5dea3a55c62", 00:15:42.274 "is_configured": true, 00:15:42.274 "data_offset": 2048, 00:15:42.274 "data_size": 63488 00:15:42.274 }, 00:15:42.274 { 00:15:42.274 "name": "BaseBdev2", 00:15:42.274 "uuid": "5bbd970d-0789-5da1-9b80-af22f02ffb0e", 00:15:42.274 "is_configured": true, 00:15:42.274 "data_offset": 2048, 00:15:42.274 "data_size": 63488 00:15:42.274 } 00:15:42.274 ] 00:15:42.274 }' 00:15:42.274 15:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.274 15:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:42.274 15:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.531 15:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:42.531 15:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:42.531 15:42:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.531 15:42:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.531 [2024-12-06 15:42:25.608711] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:42.531 [2024-12-06 15:42:25.666370] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:42.531 [2024-12-06 15:42:25.666613] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:42.531 [2024-12-06 15:42:25.666710] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:42.531 [2024-12-06 15:42:25.666754] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:42.531 15:42:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.531 15:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:42.531 15:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:42.531 15:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.531 15:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:42.531 15:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:42.531 15:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:42.531 15:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.531 15:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.531 15:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.531 15:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.531 15:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.531 15:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.531 15:42:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.531 15:42:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.531 15:42:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.531 15:42:25 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.531 "name": "raid_bdev1", 00:15:42.531 "uuid": "0a091dff-373a-4386-8dcb-ad1dab3a0bb7", 00:15:42.531 "strip_size_kb": 0, 00:15:42.531 "state": "online", 00:15:42.531 "raid_level": "raid1", 00:15:42.531 "superblock": true, 00:15:42.531 "num_base_bdevs": 2, 00:15:42.531 "num_base_bdevs_discovered": 1, 00:15:42.531 "num_base_bdevs_operational": 1, 00:15:42.531 "base_bdevs_list": [ 00:15:42.531 { 00:15:42.531 "name": null, 00:15:42.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.531 "is_configured": false, 00:15:42.531 "data_offset": 0, 00:15:42.531 "data_size": 63488 00:15:42.531 }, 00:15:42.531 { 00:15:42.531 "name": "BaseBdev2", 00:15:42.531 "uuid": "5bbd970d-0789-5da1-9b80-af22f02ffb0e", 00:15:42.531 "is_configured": true, 00:15:42.531 "data_offset": 2048, 00:15:42.531 "data_size": 63488 00:15:42.531 } 00:15:42.531 ] 00:15:42.531 }' 00:15:42.531 15:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.531 15:42:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.098 15:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:43.098 15:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.098 15:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:43.098 15:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:43.098 15:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.098 15:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.098 15:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.098 15:42:26 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.098 15:42:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.098 15:42:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.098 15:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.098 "name": "raid_bdev1", 00:15:43.098 "uuid": "0a091dff-373a-4386-8dcb-ad1dab3a0bb7", 00:15:43.098 "strip_size_kb": 0, 00:15:43.098 "state": "online", 00:15:43.098 "raid_level": "raid1", 00:15:43.098 "superblock": true, 00:15:43.098 "num_base_bdevs": 2, 00:15:43.098 "num_base_bdevs_discovered": 1, 00:15:43.098 "num_base_bdevs_operational": 1, 00:15:43.098 "base_bdevs_list": [ 00:15:43.098 { 00:15:43.098 "name": null, 00:15:43.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.098 "is_configured": false, 00:15:43.098 "data_offset": 0, 00:15:43.098 "data_size": 63488 00:15:43.098 }, 00:15:43.098 { 00:15:43.098 "name": "BaseBdev2", 00:15:43.098 "uuid": "5bbd970d-0789-5da1-9b80-af22f02ffb0e", 00:15:43.098 "is_configured": true, 00:15:43.098 "data_offset": 2048, 00:15:43.098 "data_size": 63488 00:15:43.098 } 00:15:43.098 ] 00:15:43.098 }' 00:15:43.098 15:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.098 15:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:43.098 15:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.098 15:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:43.098 15:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:43.098 15:42:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.098 15:42:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:43.098 [2024-12-06 15:42:26.282699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:43.098 [2024-12-06 15:42:26.301322] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:15:43.098 15:42:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.098 15:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:43.098 [2024-12-06 15:42:26.303815] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:44.188 15:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:44.188 15:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.188 15:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:44.188 15:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:44.188 15:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.188 15:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.188 15:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.188 15:42:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.188 15:42:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.188 15:42:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.188 15:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.188 "name": "raid_bdev1", 00:15:44.188 "uuid": "0a091dff-373a-4386-8dcb-ad1dab3a0bb7", 00:15:44.188 "strip_size_kb": 0, 00:15:44.188 "state": "online", 00:15:44.188 "raid_level": "raid1", 
00:15:44.188 "superblock": true, 00:15:44.188 "num_base_bdevs": 2, 00:15:44.188 "num_base_bdevs_discovered": 2, 00:15:44.188 "num_base_bdevs_operational": 2, 00:15:44.188 "process": { 00:15:44.188 "type": "rebuild", 00:15:44.188 "target": "spare", 00:15:44.188 "progress": { 00:15:44.188 "blocks": 20480, 00:15:44.188 "percent": 32 00:15:44.188 } 00:15:44.188 }, 00:15:44.188 "base_bdevs_list": [ 00:15:44.188 { 00:15:44.188 "name": "spare", 00:15:44.188 "uuid": "67dd631e-50ee-5b7b-bdb5-c5dea3a55c62", 00:15:44.188 "is_configured": true, 00:15:44.188 "data_offset": 2048, 00:15:44.188 "data_size": 63488 00:15:44.188 }, 00:15:44.188 { 00:15:44.188 "name": "BaseBdev2", 00:15:44.188 "uuid": "5bbd970d-0789-5da1-9b80-af22f02ffb0e", 00:15:44.188 "is_configured": true, 00:15:44.188 "data_offset": 2048, 00:15:44.188 "data_size": 63488 00:15:44.188 } 00:15:44.188 ] 00:15:44.188 }' 00:15:44.188 15:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.188 15:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:44.188 15:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.189 15:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:44.189 15:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:44.189 15:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:44.189 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:44.189 15:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:44.189 15:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:44.189 15:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:44.189 15:42:27 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=395 00:15:44.189 15:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:44.189 15:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:44.189 15:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.189 15:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:44.189 15:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:44.189 15:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.189 15:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.189 15:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.189 15:42:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.189 15:42:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.189 15:42:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.189 15:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.189 "name": "raid_bdev1", 00:15:44.189 "uuid": "0a091dff-373a-4386-8dcb-ad1dab3a0bb7", 00:15:44.189 "strip_size_kb": 0, 00:15:44.189 "state": "online", 00:15:44.189 "raid_level": "raid1", 00:15:44.189 "superblock": true, 00:15:44.189 "num_base_bdevs": 2, 00:15:44.189 "num_base_bdevs_discovered": 2, 00:15:44.189 "num_base_bdevs_operational": 2, 00:15:44.189 "process": { 00:15:44.189 "type": "rebuild", 00:15:44.189 "target": "spare", 00:15:44.189 "progress": { 00:15:44.189 "blocks": 22528, 00:15:44.189 "percent": 35 00:15:44.189 } 00:15:44.189 }, 00:15:44.189 "base_bdevs_list": [ 
00:15:44.189 { 00:15:44.189 "name": "spare", 00:15:44.189 "uuid": "67dd631e-50ee-5b7b-bdb5-c5dea3a55c62", 00:15:44.189 "is_configured": true, 00:15:44.189 "data_offset": 2048, 00:15:44.189 "data_size": 63488 00:15:44.189 }, 00:15:44.189 { 00:15:44.189 "name": "BaseBdev2", 00:15:44.189 "uuid": "5bbd970d-0789-5da1-9b80-af22f02ffb0e", 00:15:44.189 "is_configured": true, 00:15:44.189 "data_offset": 2048, 00:15:44.189 "data_size": 63488 00:15:44.189 } 00:15:44.189 ] 00:15:44.189 }' 00:15:44.189 15:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.447 15:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:44.447 15:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.447 15:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:44.447 15:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:45.382 15:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:45.382 15:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:45.382 15:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.382 15:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:45.382 15:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:45.382 15:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.382 15:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.382 15:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.382 15:42:28 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.382 15:42:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.382 15:42:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.382 15:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.382 "name": "raid_bdev1", 00:15:45.382 "uuid": "0a091dff-373a-4386-8dcb-ad1dab3a0bb7", 00:15:45.382 "strip_size_kb": 0, 00:15:45.382 "state": "online", 00:15:45.382 "raid_level": "raid1", 00:15:45.382 "superblock": true, 00:15:45.382 "num_base_bdevs": 2, 00:15:45.382 "num_base_bdevs_discovered": 2, 00:15:45.382 "num_base_bdevs_operational": 2, 00:15:45.382 "process": { 00:15:45.382 "type": "rebuild", 00:15:45.382 "target": "spare", 00:15:45.382 "progress": { 00:15:45.382 "blocks": 45056, 00:15:45.382 "percent": 70 00:15:45.382 } 00:15:45.382 }, 00:15:45.382 "base_bdevs_list": [ 00:15:45.382 { 00:15:45.382 "name": "spare", 00:15:45.382 "uuid": "67dd631e-50ee-5b7b-bdb5-c5dea3a55c62", 00:15:45.382 "is_configured": true, 00:15:45.382 "data_offset": 2048, 00:15:45.382 "data_size": 63488 00:15:45.382 }, 00:15:45.382 { 00:15:45.382 "name": "BaseBdev2", 00:15:45.382 "uuid": "5bbd970d-0789-5da1-9b80-af22f02ffb0e", 00:15:45.382 "is_configured": true, 00:15:45.382 "data_offset": 2048, 00:15:45.382 "data_size": 63488 00:15:45.382 } 00:15:45.382 ] 00:15:45.382 }' 00:15:45.382 15:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.382 15:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:45.382 15:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.641 15:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:45.641 15:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:46.209 [2024-12-06 
15:42:29.429442] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:46.209 [2024-12-06 15:42:29.429830] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:46.209 [2024-12-06 15:42:29.429998] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:46.468 15:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:46.468 15:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:46.468 15:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.468 15:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:46.468 15:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:46.468 15:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.468 15:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.468 15:42:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.468 15:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.468 15:42:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.468 15:42:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.468 15:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.468 "name": "raid_bdev1", 00:15:46.468 "uuid": "0a091dff-373a-4386-8dcb-ad1dab3a0bb7", 00:15:46.468 "strip_size_kb": 0, 00:15:46.468 "state": "online", 00:15:46.468 "raid_level": "raid1", 00:15:46.468 "superblock": true, 00:15:46.468 "num_base_bdevs": 2, 00:15:46.468 "num_base_bdevs_discovered": 2, 00:15:46.468 
"num_base_bdevs_operational": 2, 00:15:46.468 "base_bdevs_list": [ 00:15:46.468 { 00:15:46.468 "name": "spare", 00:15:46.468 "uuid": "67dd631e-50ee-5b7b-bdb5-c5dea3a55c62", 00:15:46.468 "is_configured": true, 00:15:46.468 "data_offset": 2048, 00:15:46.469 "data_size": 63488 00:15:46.469 }, 00:15:46.469 { 00:15:46.469 "name": "BaseBdev2", 00:15:46.469 "uuid": "5bbd970d-0789-5da1-9b80-af22f02ffb0e", 00:15:46.469 "is_configured": true, 00:15:46.469 "data_offset": 2048, 00:15:46.469 "data_size": 63488 00:15:46.469 } 00:15:46.469 ] 00:15:46.469 }' 00:15:46.469 15:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.729 15:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:46.729 15:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.729 15:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:46.729 15:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:46.729 15:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:46.729 15:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.729 15:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:46.729 15:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:46.729 15:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.729 15:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.729 15:42:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.729 15:42:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.729 15:42:29 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.729 15:42:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.729 15:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.729 "name": "raid_bdev1", 00:15:46.729 "uuid": "0a091dff-373a-4386-8dcb-ad1dab3a0bb7", 00:15:46.729 "strip_size_kb": 0, 00:15:46.729 "state": "online", 00:15:46.729 "raid_level": "raid1", 00:15:46.729 "superblock": true, 00:15:46.729 "num_base_bdevs": 2, 00:15:46.729 "num_base_bdevs_discovered": 2, 00:15:46.729 "num_base_bdevs_operational": 2, 00:15:46.729 "base_bdevs_list": [ 00:15:46.729 { 00:15:46.729 "name": "spare", 00:15:46.729 "uuid": "67dd631e-50ee-5b7b-bdb5-c5dea3a55c62", 00:15:46.729 "is_configured": true, 00:15:46.729 "data_offset": 2048, 00:15:46.729 "data_size": 63488 00:15:46.729 }, 00:15:46.729 { 00:15:46.729 "name": "BaseBdev2", 00:15:46.729 "uuid": "5bbd970d-0789-5da1-9b80-af22f02ffb0e", 00:15:46.729 "is_configured": true, 00:15:46.729 "data_offset": 2048, 00:15:46.729 "data_size": 63488 00:15:46.729 } 00:15:46.729 ] 00:15:46.729 }' 00:15:46.729 15:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.729 15:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:46.729 15:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.729 15:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:46.729 15:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:46.729 15:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.729 15:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.729 
15:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:46.729 15:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:46.729 15:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:46.729 15:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.729 15:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.729 15:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.729 15:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.729 15:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.729 15:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.729 15:42:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.729 15:42:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.729 15:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.987 15:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.987 "name": "raid_bdev1", 00:15:46.987 "uuid": "0a091dff-373a-4386-8dcb-ad1dab3a0bb7", 00:15:46.987 "strip_size_kb": 0, 00:15:46.987 "state": "online", 00:15:46.987 "raid_level": "raid1", 00:15:46.988 "superblock": true, 00:15:46.988 "num_base_bdevs": 2, 00:15:46.988 "num_base_bdevs_discovered": 2, 00:15:46.988 "num_base_bdevs_operational": 2, 00:15:46.988 "base_bdevs_list": [ 00:15:46.988 { 00:15:46.988 "name": "spare", 00:15:46.988 "uuid": "67dd631e-50ee-5b7b-bdb5-c5dea3a55c62", 00:15:46.988 "is_configured": true, 00:15:46.988 "data_offset": 2048, 00:15:46.988 "data_size": 63488 00:15:46.988 }, 
00:15:46.988 { 00:15:46.988 "name": "BaseBdev2", 00:15:46.988 "uuid": "5bbd970d-0789-5da1-9b80-af22f02ffb0e", 00:15:46.988 "is_configured": true, 00:15:46.988 "data_offset": 2048, 00:15:46.988 "data_size": 63488 00:15:46.988 } 00:15:46.988 ] 00:15:46.988 }' 00:15:46.988 15:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.988 15:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.246 15:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:47.246 15:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.246 15:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.246 [2024-12-06 15:42:30.400426] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:47.246 [2024-12-06 15:42:30.400472] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:47.246 [2024-12-06 15:42:30.400610] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:47.246 [2024-12-06 15:42:30.400704] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:47.246 [2024-12-06 15:42:30.400722] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:47.246 15:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.246 15:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.246 15:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.246 15:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:47.246 15:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.246 15:42:30 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.246 15:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:47.246 15:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:47.247 15:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:47.247 15:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:47.247 15:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:47.247 15:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:47.247 15:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:47.247 15:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:47.247 15:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:47.247 15:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:47.247 15:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:47.247 15:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:47.247 15:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:47.506 /dev/nbd0 00:15:47.506 15:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:47.506 15:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:47.506 15:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:47.506 15:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:15:47.506 15:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:47.506 15:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:47.506 15:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:47.506 15:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:47.506 15:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:47.506 15:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:47.506 15:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:47.506 1+0 records in 00:15:47.506 1+0 records out 00:15:47.506 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365528 s, 11.2 MB/s 00:15:47.506 15:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.506 15:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:47.506 15:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.506 15:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:47.506 15:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:47.506 15:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:47.506 15:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:47.506 15:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:47.765 /dev/nbd1 00:15:47.765 15:42:31 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:47.765 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:47.765 15:42:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:47.765 15:42:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:47.765 15:42:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:47.765 15:42:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:47.765 15:42:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:47.765 15:42:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:47.765 15:42:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:47.765 15:42:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:47.765 15:42:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:47.765 1+0 records in 00:15:47.765 1+0 records out 00:15:47.765 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000597969 s, 6.8 MB/s 00:15:47.765 15:42:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.765 15:42:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:47.765 15:42:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.765 15:42:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:47.765 15:42:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:47.765 15:42:31 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:47.765 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:47.765 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:48.025 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:48.025 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:48.025 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:48.025 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:48.025 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:48.025 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:48.025 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:48.284 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:48.284 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:48.284 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:48.284 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:48.284 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:48.284 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:48.284 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:48.284 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:48.284 15:42:31 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:48.284 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:48.543 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:48.543 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:48.543 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:48.543 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:48.543 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:48.543 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:48.543 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:48.543 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:48.543 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:48.543 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:48.543 15:42:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.543 15:42:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.543 15:42:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.543 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:48.543 15:42:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.543 15:42:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.543 [2024-12-06 15:42:31.720360] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:15:48.543 [2024-12-06 15:42:31.720624] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.543 [2024-12-06 15:42:31.720680] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:48.543 [2024-12-06 15:42:31.720694] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.543 [2024-12-06 15:42:31.723806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.543 [2024-12-06 15:42:31.723854] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:48.543 [2024-12-06 15:42:31.724000] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:48.543 [2024-12-06 15:42:31.724075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:48.543 [2024-12-06 15:42:31.724257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:48.543 spare 00:15:48.543 15:42:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.543 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:48.543 15:42:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.543 15:42:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.543 [2024-12-06 15:42:31.824271] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:48.543 [2024-12-06 15:42:31.824602] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:48.543 [2024-12-06 15:42:31.825096] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:15:48.543 [2024-12-06 15:42:31.825468] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:48.543 [2024-12-06 15:42:31.825582] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:48.543 [2024-12-06 15:42:31.825910] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:48.543 15:42:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.543 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:48.543 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.543 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.543 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:48.543 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:48.543 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:48.543 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.543 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.543 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.543 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.803 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.803 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.803 15:42:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.803 15:42:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.803 15:42:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.803 
15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.803 "name": "raid_bdev1", 00:15:48.803 "uuid": "0a091dff-373a-4386-8dcb-ad1dab3a0bb7", 00:15:48.803 "strip_size_kb": 0, 00:15:48.803 "state": "online", 00:15:48.803 "raid_level": "raid1", 00:15:48.803 "superblock": true, 00:15:48.803 "num_base_bdevs": 2, 00:15:48.803 "num_base_bdevs_discovered": 2, 00:15:48.803 "num_base_bdevs_operational": 2, 00:15:48.803 "base_bdevs_list": [ 00:15:48.803 { 00:15:48.803 "name": "spare", 00:15:48.803 "uuid": "67dd631e-50ee-5b7b-bdb5-c5dea3a55c62", 00:15:48.803 "is_configured": true, 00:15:48.803 "data_offset": 2048, 00:15:48.803 "data_size": 63488 00:15:48.803 }, 00:15:48.803 { 00:15:48.803 "name": "BaseBdev2", 00:15:48.803 "uuid": "5bbd970d-0789-5da1-9b80-af22f02ffb0e", 00:15:48.803 "is_configured": true, 00:15:48.803 "data_offset": 2048, 00:15:48.803 "data_size": 63488 00:15:48.803 } 00:15:48.803 ] 00:15:48.803 }' 00:15:48.803 15:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.803 15:42:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.062 15:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:49.062 15:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.062 15:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:49.062 15:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:49.062 15:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.062 15:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.062 15:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.062 15:42:32 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.062 15:42:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.062 15:42:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.062 15:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.062 "name": "raid_bdev1", 00:15:49.062 "uuid": "0a091dff-373a-4386-8dcb-ad1dab3a0bb7", 00:15:49.062 "strip_size_kb": 0, 00:15:49.062 "state": "online", 00:15:49.062 "raid_level": "raid1", 00:15:49.062 "superblock": true, 00:15:49.062 "num_base_bdevs": 2, 00:15:49.062 "num_base_bdevs_discovered": 2, 00:15:49.062 "num_base_bdevs_operational": 2, 00:15:49.062 "base_bdevs_list": [ 00:15:49.062 { 00:15:49.062 "name": "spare", 00:15:49.062 "uuid": "67dd631e-50ee-5b7b-bdb5-c5dea3a55c62", 00:15:49.062 "is_configured": true, 00:15:49.062 "data_offset": 2048, 00:15:49.062 "data_size": 63488 00:15:49.062 }, 00:15:49.062 { 00:15:49.062 "name": "BaseBdev2", 00:15:49.062 "uuid": "5bbd970d-0789-5da1-9b80-af22f02ffb0e", 00:15:49.062 "is_configured": true, 00:15:49.062 "data_offset": 2048, 00:15:49.062 "data_size": 63488 00:15:49.062 } 00:15:49.062 ] 00:15:49.062 }' 00:15:49.062 15:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.062 15:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:49.062 15:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:49.321 15:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:49.321 15:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.321 15:42:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.321 15:42:32 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:49.321 15:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:49.321 15:42:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.321 15:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:49.321 15:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:49.321 15:42:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.321 15:42:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.321 [2024-12-06 15:42:32.435668] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:49.321 15:42:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.321 15:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:49.321 15:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.321 15:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.321 15:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:49.321 15:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:49.321 15:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:49.321 15:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.321 15:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.321 15:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.321 15:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:15:49.321 15:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.321 15:42:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.321 15:42:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.321 15:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.321 15:42:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.321 15:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.321 "name": "raid_bdev1", 00:15:49.321 "uuid": "0a091dff-373a-4386-8dcb-ad1dab3a0bb7", 00:15:49.321 "strip_size_kb": 0, 00:15:49.321 "state": "online", 00:15:49.321 "raid_level": "raid1", 00:15:49.321 "superblock": true, 00:15:49.321 "num_base_bdevs": 2, 00:15:49.321 "num_base_bdevs_discovered": 1, 00:15:49.322 "num_base_bdevs_operational": 1, 00:15:49.322 "base_bdevs_list": [ 00:15:49.322 { 00:15:49.322 "name": null, 00:15:49.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.322 "is_configured": false, 00:15:49.322 "data_offset": 0, 00:15:49.322 "data_size": 63488 00:15:49.322 }, 00:15:49.322 { 00:15:49.322 "name": "BaseBdev2", 00:15:49.322 "uuid": "5bbd970d-0789-5da1-9b80-af22f02ffb0e", 00:15:49.322 "is_configured": true, 00:15:49.322 "data_offset": 2048, 00:15:49.322 "data_size": 63488 00:15:49.322 } 00:15:49.322 ] 00:15:49.322 }' 00:15:49.322 15:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.322 15:42:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.580 15:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:49.581 15:42:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.581 15:42:32 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.581 [2024-12-06 15:42:32.871129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:49.581 [2024-12-06 15:42:32.871402] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:49.581 [2024-12-06 15:42:32.871422] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:49.581 [2024-12-06 15:42:32.871471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:49.840 [2024-12-06 15:42:32.889981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:15:49.840 15:42:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.840 15:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:49.840 [2024-12-06 15:42:32.892605] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:50.776 15:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:50.776 15:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:50.776 15:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:50.776 15:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:50.776 15:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.776 15:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.776 15:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.776 15:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:50.776 15:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.776 15:42:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.776 15:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.776 "name": "raid_bdev1", 00:15:50.776 "uuid": "0a091dff-373a-4386-8dcb-ad1dab3a0bb7", 00:15:50.776 "strip_size_kb": 0, 00:15:50.776 "state": "online", 00:15:50.776 "raid_level": "raid1", 00:15:50.776 "superblock": true, 00:15:50.776 "num_base_bdevs": 2, 00:15:50.776 "num_base_bdevs_discovered": 2, 00:15:50.776 "num_base_bdevs_operational": 2, 00:15:50.776 "process": { 00:15:50.776 "type": "rebuild", 00:15:50.776 "target": "spare", 00:15:50.776 "progress": { 00:15:50.776 "blocks": 20480, 00:15:50.776 "percent": 32 00:15:50.776 } 00:15:50.776 }, 00:15:50.776 "base_bdevs_list": [ 00:15:50.776 { 00:15:50.776 "name": "spare", 00:15:50.776 "uuid": "67dd631e-50ee-5b7b-bdb5-c5dea3a55c62", 00:15:50.776 "is_configured": true, 00:15:50.776 "data_offset": 2048, 00:15:50.776 "data_size": 63488 00:15:50.776 }, 00:15:50.776 { 00:15:50.776 "name": "BaseBdev2", 00:15:50.776 "uuid": "5bbd970d-0789-5da1-9b80-af22f02ffb0e", 00:15:50.776 "is_configured": true, 00:15:50.776 "data_offset": 2048, 00:15:50.776 "data_size": 63488 00:15:50.776 } 00:15:50.776 ] 00:15:50.776 }' 00:15:50.776 15:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.776 15:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:50.776 15:42:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.776 15:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:50.776 15:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:50.776 15:42:34 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.776 15:42:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.776 [2024-12-06 15:42:34.033143] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:51.035 [2024-12-06 15:42:34.101809] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:51.035 [2024-12-06 15:42:34.101890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.035 [2024-12-06 15:42:34.101909] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:51.035 [2024-12-06 15:42:34.101923] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:51.035 15:42:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.035 15:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:51.035 15:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.035 15:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.035 15:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:51.035 15:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:51.035 15:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:51.035 15:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.035 15:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.035 15:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.035 15:42:34 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.035 15:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.035 15:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.035 15:42:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.035 15:42:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.035 15:42:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.035 15:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.035 "name": "raid_bdev1", 00:15:51.035 "uuid": "0a091dff-373a-4386-8dcb-ad1dab3a0bb7", 00:15:51.035 "strip_size_kb": 0, 00:15:51.035 "state": "online", 00:15:51.035 "raid_level": "raid1", 00:15:51.035 "superblock": true, 00:15:51.035 "num_base_bdevs": 2, 00:15:51.035 "num_base_bdevs_discovered": 1, 00:15:51.035 "num_base_bdevs_operational": 1, 00:15:51.035 "base_bdevs_list": [ 00:15:51.035 { 00:15:51.035 "name": null, 00:15:51.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.035 "is_configured": false, 00:15:51.035 "data_offset": 0, 00:15:51.035 "data_size": 63488 00:15:51.035 }, 00:15:51.035 { 00:15:51.035 "name": "BaseBdev2", 00:15:51.035 "uuid": "5bbd970d-0789-5da1-9b80-af22f02ffb0e", 00:15:51.035 "is_configured": true, 00:15:51.035 "data_offset": 2048, 00:15:51.035 "data_size": 63488 00:15:51.035 } 00:15:51.035 ] 00:15:51.035 }' 00:15:51.035 15:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.035 15:42:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.602 15:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:51.602 15:42:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:51.602 15:42:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.602 [2024-12-06 15:42:34.613560] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:51.602 [2024-12-06 15:42:34.613781] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.602 [2024-12-06 15:42:34.613849] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:51.602 [2024-12-06 15:42:34.614051] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.602 [2024-12-06 15:42:34.614728] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.602 [2024-12-06 15:42:34.614768] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:51.602 [2024-12-06 15:42:34.614891] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:51.602 [2024-12-06 15:42:34.614911] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:51.602 [2024-12-06 15:42:34.614924] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:51.602 [2024-12-06 15:42:34.614957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:51.602 [2024-12-06 15:42:34.633236] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:15:51.602 spare 00:15:51.602 15:42:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.602 15:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:51.602 [2024-12-06 15:42:34.635984] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:52.537 15:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:52.537 15:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:52.537 15:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:52.537 15:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:52.537 15:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:52.537 15:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.537 15:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.537 15:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.537 15:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.537 15:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.537 15:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:52.537 "name": "raid_bdev1", 00:15:52.537 "uuid": "0a091dff-373a-4386-8dcb-ad1dab3a0bb7", 00:15:52.537 "strip_size_kb": 0, 00:15:52.537 "state": "online", 00:15:52.537 
"raid_level": "raid1", 00:15:52.537 "superblock": true, 00:15:52.537 "num_base_bdevs": 2, 00:15:52.537 "num_base_bdevs_discovered": 2, 00:15:52.537 "num_base_bdevs_operational": 2, 00:15:52.537 "process": { 00:15:52.537 "type": "rebuild", 00:15:52.537 "target": "spare", 00:15:52.537 "progress": { 00:15:52.537 "blocks": 20480, 00:15:52.537 "percent": 32 00:15:52.537 } 00:15:52.537 }, 00:15:52.537 "base_bdevs_list": [ 00:15:52.537 { 00:15:52.537 "name": "spare", 00:15:52.537 "uuid": "67dd631e-50ee-5b7b-bdb5-c5dea3a55c62", 00:15:52.537 "is_configured": true, 00:15:52.537 "data_offset": 2048, 00:15:52.537 "data_size": 63488 00:15:52.537 }, 00:15:52.537 { 00:15:52.537 "name": "BaseBdev2", 00:15:52.537 "uuid": "5bbd970d-0789-5da1-9b80-af22f02ffb0e", 00:15:52.537 "is_configured": true, 00:15:52.537 "data_offset": 2048, 00:15:52.537 "data_size": 63488 00:15:52.537 } 00:15:52.537 ] 00:15:52.537 }' 00:15:52.537 15:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:52.537 15:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:52.537 15:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:52.537 15:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:52.537 15:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:52.537 15:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.537 15:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.537 [2024-12-06 15:42:35.787736] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:52.796 [2024-12-06 15:42:35.845130] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:52.796 [2024-12-06 15:42:35.845370] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.796 [2024-12-06 15:42:35.845480] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:52.796 [2024-12-06 15:42:35.845500] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:52.796 15:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.796 15:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:52.796 15:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:52.796 15:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.796 15:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:52.796 15:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:52.796 15:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:52.796 15:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.796 15:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.796 15:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.796 15:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.796 15:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.796 15:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.796 15:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.796 15:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.796 15:42:35 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.796 15:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.796 "name": "raid_bdev1", 00:15:52.796 "uuid": "0a091dff-373a-4386-8dcb-ad1dab3a0bb7", 00:15:52.796 "strip_size_kb": 0, 00:15:52.796 "state": "online", 00:15:52.796 "raid_level": "raid1", 00:15:52.796 "superblock": true, 00:15:52.796 "num_base_bdevs": 2, 00:15:52.796 "num_base_bdevs_discovered": 1, 00:15:52.796 "num_base_bdevs_operational": 1, 00:15:52.796 "base_bdevs_list": [ 00:15:52.796 { 00:15:52.796 "name": null, 00:15:52.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.796 "is_configured": false, 00:15:52.796 "data_offset": 0, 00:15:52.796 "data_size": 63488 00:15:52.796 }, 00:15:52.796 { 00:15:52.796 "name": "BaseBdev2", 00:15:52.796 "uuid": "5bbd970d-0789-5da1-9b80-af22f02ffb0e", 00:15:52.796 "is_configured": true, 00:15:52.796 "data_offset": 2048, 00:15:52.796 "data_size": 63488 00:15:52.796 } 00:15:52.796 ] 00:15:52.796 }' 00:15:52.796 15:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.796 15:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.054 15:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:53.054 15:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:53.054 15:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:53.054 15:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:53.054 15:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:53.054 15:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.054 15:42:36 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.054 15:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.054 15:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.054 15:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.054 15:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:53.054 "name": "raid_bdev1", 00:15:53.054 "uuid": "0a091dff-373a-4386-8dcb-ad1dab3a0bb7", 00:15:53.054 "strip_size_kb": 0, 00:15:53.054 "state": "online", 00:15:53.054 "raid_level": "raid1", 00:15:53.054 "superblock": true, 00:15:53.054 "num_base_bdevs": 2, 00:15:53.054 "num_base_bdevs_discovered": 1, 00:15:53.054 "num_base_bdevs_operational": 1, 00:15:53.054 "base_bdevs_list": [ 00:15:53.054 { 00:15:53.054 "name": null, 00:15:53.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.054 "is_configured": false, 00:15:53.054 "data_offset": 0, 00:15:53.054 "data_size": 63488 00:15:53.054 }, 00:15:53.054 { 00:15:53.054 "name": "BaseBdev2", 00:15:53.054 "uuid": "5bbd970d-0789-5da1-9b80-af22f02ffb0e", 00:15:53.054 "is_configured": true, 00:15:53.054 "data_offset": 2048, 00:15:53.054 "data_size": 63488 00:15:53.054 } 00:15:53.054 ] 00:15:53.054 }' 00:15:53.313 15:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:53.313 15:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:53.313 15:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:53.313 15:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:53.313 15:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:53.313 15:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:15:53.313 15:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.313 15:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.314 15:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:53.314 15:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.314 15:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.314 [2024-12-06 15:42:36.436835] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:53.314 [2024-12-06 15:42:36.437025] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:53.314 [2024-12-06 15:42:36.437097] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:53.314 [2024-12-06 15:42:36.437194] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:53.314 [2024-12-06 15:42:36.437817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:53.314 [2024-12-06 15:42:36.437937] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:53.314 [2024-12-06 15:42:36.438062] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:53.314 [2024-12-06 15:42:36.438081] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:53.314 [2024-12-06 15:42:36.438096] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:53.314 [2024-12-06 15:42:36.438110] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:53.314 BaseBdev1 00:15:53.314 15:42:36 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.314 15:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:54.249 15:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:54.249 15:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.249 15:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.249 15:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:54.249 15:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:54.249 15:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:54.249 15:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.249 15:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.249 15:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.249 15:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.249 15:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.250 15:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.250 15:42:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.250 15:42:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.250 15:42:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.250 15:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.250 "name": "raid_bdev1", 00:15:54.250 "uuid": "0a091dff-373a-4386-8dcb-ad1dab3a0bb7", 00:15:54.250 
"strip_size_kb": 0, 00:15:54.250 "state": "online", 00:15:54.250 "raid_level": "raid1", 00:15:54.250 "superblock": true, 00:15:54.250 "num_base_bdevs": 2, 00:15:54.250 "num_base_bdevs_discovered": 1, 00:15:54.250 "num_base_bdevs_operational": 1, 00:15:54.250 "base_bdevs_list": [ 00:15:54.250 { 00:15:54.250 "name": null, 00:15:54.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.250 "is_configured": false, 00:15:54.250 "data_offset": 0, 00:15:54.250 "data_size": 63488 00:15:54.250 }, 00:15:54.250 { 00:15:54.250 "name": "BaseBdev2", 00:15:54.250 "uuid": "5bbd970d-0789-5da1-9b80-af22f02ffb0e", 00:15:54.250 "is_configured": true, 00:15:54.250 "data_offset": 2048, 00:15:54.250 "data_size": 63488 00:15:54.250 } 00:15:54.250 ] 00:15:54.250 }' 00:15:54.250 15:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.250 15:42:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.818 15:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:54.818 15:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.818 15:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:54.818 15:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:54.818 15:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.818 15:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.818 15:42:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.818 15:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.818 15:42:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.818 15:42:37 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.818 15:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:54.818 "name": "raid_bdev1", 00:15:54.818 "uuid": "0a091dff-373a-4386-8dcb-ad1dab3a0bb7", 00:15:54.818 "strip_size_kb": 0, 00:15:54.818 "state": "online", 00:15:54.818 "raid_level": "raid1", 00:15:54.818 "superblock": true, 00:15:54.818 "num_base_bdevs": 2, 00:15:54.818 "num_base_bdevs_discovered": 1, 00:15:54.818 "num_base_bdevs_operational": 1, 00:15:54.818 "base_bdevs_list": [ 00:15:54.818 { 00:15:54.818 "name": null, 00:15:54.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.818 "is_configured": false, 00:15:54.818 "data_offset": 0, 00:15:54.818 "data_size": 63488 00:15:54.818 }, 00:15:54.818 { 00:15:54.818 "name": "BaseBdev2", 00:15:54.818 "uuid": "5bbd970d-0789-5da1-9b80-af22f02ffb0e", 00:15:54.818 "is_configured": true, 00:15:54.818 "data_offset": 2048, 00:15:54.818 "data_size": 63488 00:15:54.818 } 00:15:54.818 ] 00:15:54.818 }' 00:15:54.818 15:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:54.818 15:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:54.818 15:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:54.818 15:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:54.818 15:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:54.818 15:42:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:15:54.818 15:42:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:54.818 15:42:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local 
arg=rpc_cmd 00:15:54.818 15:42:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:54.818 15:42:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:54.818 15:42:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:54.818 15:42:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:54.818 15:42:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.818 15:42:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.818 [2024-12-06 15:42:38.011699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:54.818 [2024-12-06 15:42:38.012058] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:54.818 [2024-12-06 15:42:38.012175] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:54.818 request: 00:15:54.818 { 00:15:54.818 "base_bdev": "BaseBdev1", 00:15:54.818 "raid_bdev": "raid_bdev1", 00:15:54.818 "method": "bdev_raid_add_base_bdev", 00:15:54.818 "req_id": 1 00:15:54.818 } 00:15:54.818 Got JSON-RPC error response 00:15:54.818 response: 00:15:54.818 { 00:15:54.818 "code": -22, 00:15:54.818 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:54.818 } 00:15:54.818 15:42:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:54.818 15:42:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:15:54.818 15:42:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:54.818 15:42:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:54.818 15:42:38 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:54.818 15:42:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:55.754 15:42:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:55.754 15:42:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.754 15:42:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.754 15:42:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:55.754 15:42:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:55.754 15:42:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:55.754 15:42:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.754 15:42:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.754 15:42:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.754 15:42:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.754 15:42:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.754 15:42:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.754 15:42:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.754 15:42:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.031 15:42:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.031 15:42:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.031 "name": "raid_bdev1", 00:15:56.031 "uuid": 
"0a091dff-373a-4386-8dcb-ad1dab3a0bb7", 00:15:56.031 "strip_size_kb": 0, 00:15:56.031 "state": "online", 00:15:56.031 "raid_level": "raid1", 00:15:56.031 "superblock": true, 00:15:56.031 "num_base_bdevs": 2, 00:15:56.031 "num_base_bdevs_discovered": 1, 00:15:56.031 "num_base_bdevs_operational": 1, 00:15:56.031 "base_bdevs_list": [ 00:15:56.031 { 00:15:56.031 "name": null, 00:15:56.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.031 "is_configured": false, 00:15:56.031 "data_offset": 0, 00:15:56.031 "data_size": 63488 00:15:56.031 }, 00:15:56.031 { 00:15:56.031 "name": "BaseBdev2", 00:15:56.031 "uuid": "5bbd970d-0789-5da1-9b80-af22f02ffb0e", 00:15:56.031 "is_configured": true, 00:15:56.031 "data_offset": 2048, 00:15:56.031 "data_size": 63488 00:15:56.031 } 00:15:56.031 ] 00:15:56.031 }' 00:15:56.031 15:42:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.031 15:42:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.291 15:42:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:56.291 15:42:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:56.291 15:42:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:56.291 15:42:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:56.291 15:42:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:56.291 15:42:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.291 15:42:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.291 15:42:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.291 15:42:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:15:56.291 15:42:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.291 15:42:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:56.291 "name": "raid_bdev1", 00:15:56.291 "uuid": "0a091dff-373a-4386-8dcb-ad1dab3a0bb7", 00:15:56.291 "strip_size_kb": 0, 00:15:56.291 "state": "online", 00:15:56.291 "raid_level": "raid1", 00:15:56.291 "superblock": true, 00:15:56.291 "num_base_bdevs": 2, 00:15:56.291 "num_base_bdevs_discovered": 1, 00:15:56.291 "num_base_bdevs_operational": 1, 00:15:56.291 "base_bdevs_list": [ 00:15:56.291 { 00:15:56.291 "name": null, 00:15:56.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.291 "is_configured": false, 00:15:56.291 "data_offset": 0, 00:15:56.291 "data_size": 63488 00:15:56.291 }, 00:15:56.291 { 00:15:56.291 "name": "BaseBdev2", 00:15:56.291 "uuid": "5bbd970d-0789-5da1-9b80-af22f02ffb0e", 00:15:56.291 "is_configured": true, 00:15:56.291 "data_offset": 2048, 00:15:56.291 "data_size": 63488 00:15:56.291 } 00:15:56.291 ] 00:15:56.291 }' 00:15:56.291 15:42:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:56.291 15:42:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:56.291 15:42:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:56.292 15:42:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:56.292 15:42:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75776 00:15:56.292 15:42:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75776 ']' 00:15:56.292 15:42:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75776 00:15:56.549 15:42:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:56.549 15:42:39 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:56.550 15:42:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75776 00:15:56.550 15:42:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:56.550 killing process with pid 75776 00:15:56.550 Received shutdown signal, test time was about 60.000000 seconds 00:15:56.550 00:15:56.550 Latency(us) 00:15:56.550 [2024-12-06T15:42:39.845Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:56.550 [2024-12-06T15:42:39.845Z] =================================================================================================================== 00:15:56.550 [2024-12-06T15:42:39.845Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:56.550 15:42:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:56.550 15:42:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75776' 00:15:56.550 15:42:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75776 00:15:56.550 [2024-12-06 15:42:39.620236] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:56.550 15:42:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75776 00:15:56.550 [2024-12-06 15:42:39.620392] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:56.550 [2024-12-06 15:42:39.620456] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:56.550 [2024-12-06 15:42:39.620472] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:56.807 [2024-12-06 15:42:39.950349] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:58.184 15:42:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 
00:15:58.184 00:15:58.184 real 0m24.011s 00:15:58.184 user 0m28.437s 00:15:58.184 sys 0m4.657s 00:15:58.184 15:42:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:58.184 15:42:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.184 ************************************ 00:15:58.184 END TEST raid_rebuild_test_sb 00:15:58.184 ************************************ 00:15:58.184 15:42:41 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:15:58.184 15:42:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:58.184 15:42:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:58.184 15:42:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:58.184 ************************************ 00:15:58.184 START TEST raid_rebuild_test_io 00:15:58.184 ************************************ 00:15:58.184 15:42:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:15:58.184 15:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:58.184 15:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:58.184 15:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:58.184 15:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:58.184 15:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:58.184 15:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:58.184 15:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:58.184 15:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:58.184 15:42:41 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:58.184 15:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:58.184 15:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:58.184 15:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:58.184 15:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:58.184 15:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:58.184 15:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:58.184 15:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:58.184 15:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:58.184 15:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:58.184 15:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:58.184 15:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:58.184 15:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:58.184 15:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:58.184 15:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:58.184 15:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76506 00:15:58.184 15:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76506 00:15:58.184 15:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:58.184 15:42:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 
76506 ']' 00:15:58.184 15:42:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.184 15:42:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:58.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.184 15:42:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.184 15:42:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:58.184 15:42:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.184 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:58.184 Zero copy mechanism will not be used. 00:15:58.184 [2024-12-06 15:42:41.397768] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:15:58.184 [2024-12-06 15:42:41.397918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76506 ] 00:15:58.441 [2024-12-06 15:42:41.585120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.441 [2024-12-06 15:42:41.720846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.699 [2024-12-06 15:42:41.964127] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.699 [2024-12-06 15:42:41.964204] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.957 15:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:58.957 15:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:15:58.957 15:42:42 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:58.957 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:58.957 15:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.957 15:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.216 BaseBdev1_malloc 00:15:59.216 15:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.216 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:59.216 15:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.216 15:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.216 [2024-12-06 15:42:42.282733] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:59.216 [2024-12-06 15:42:42.282961] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.216 [2024-12-06 15:42:42.283027] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:59.216 [2024-12-06 15:42:42.283115] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.216 [2024-12-06 15:42:42.285859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.216 [2024-12-06 15:42:42.286000] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:59.216 BaseBdev1 00:15:59.216 15:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.216 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:59.216 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 
00:15:59.216 15:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.216 15:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.216 BaseBdev2_malloc 00:15:59.216 15:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.216 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:59.216 15:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.216 15:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.216 [2024-12-06 15:42:42.347710] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:59.216 [2024-12-06 15:42:42.347892] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.217 [2024-12-06 15:42:42.347931] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:59.217 [2024-12-06 15:42:42.347947] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.217 [2024-12-06 15:42:42.350618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.217 [2024-12-06 15:42:42.350659] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:59.217 BaseBdev2 00:15:59.217 15:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.217 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:59.217 15:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.217 15:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.217 spare_malloc 00:15:59.217 15:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:59.217 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:59.217 15:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.217 15:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.217 spare_delay 00:15:59.217 15:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.217 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:59.217 15:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.217 15:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.217 [2024-12-06 15:42:42.435269] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:59.217 [2024-12-06 15:42:42.435440] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.217 [2024-12-06 15:42:42.435497] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:59.217 [2024-12-06 15:42:42.435597] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.217 [2024-12-06 15:42:42.438364] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.217 spare 00:15:59.217 [2024-12-06 15:42:42.438511] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:59.217 15:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.217 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:59.217 15:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.217 
15:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.217 [2024-12-06 15:42:42.447406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:59.217 [2024-12-06 15:42:42.449890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:59.217 [2024-12-06 15:42:42.450095] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:59.217 [2024-12-06 15:42:42.450159] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:59.217 [2024-12-06 15:42:42.450551] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:59.217 [2024-12-06 15:42:42.450828] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:59.217 [2024-12-06 15:42:42.450921] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:59.217 [2024-12-06 15:42:42.451181] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.217 15:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.217 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:59.217 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.217 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.217 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:59.217 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.217 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:59.217 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:15:59.217 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.217 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.217 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.217 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.217 15:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.217 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.217 15:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.217 15:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.217 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.217 "name": "raid_bdev1", 00:15:59.217 "uuid": "3163469a-874e-4bef-b92d-ef483ef2c74a", 00:15:59.217 "strip_size_kb": 0, 00:15:59.217 "state": "online", 00:15:59.217 "raid_level": "raid1", 00:15:59.217 "superblock": false, 00:15:59.217 "num_base_bdevs": 2, 00:15:59.217 "num_base_bdevs_discovered": 2, 00:15:59.217 "num_base_bdevs_operational": 2, 00:15:59.217 "base_bdevs_list": [ 00:15:59.217 { 00:15:59.217 "name": "BaseBdev1", 00:15:59.217 "uuid": "c8fe12d6-cc9f-5330-850e-29f9a627a04f", 00:15:59.217 "is_configured": true, 00:15:59.217 "data_offset": 0, 00:15:59.217 "data_size": 65536 00:15:59.217 }, 00:15:59.217 { 00:15:59.217 "name": "BaseBdev2", 00:15:59.217 "uuid": "5412177b-831d-5fa3-814c-fcd4153cbc10", 00:15:59.217 "is_configured": true, 00:15:59.217 "data_offset": 0, 00:15:59.217 "data_size": 65536 00:15:59.217 } 00:15:59.217 ] 00:15:59.217 }' 00:15:59.217 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.217 15:42:42 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:15:59.783 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:59.783 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:59.783 15:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.783 15:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.783 [2024-12-06 15:42:42.899172] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:59.783 15:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.783 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:59.783 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:59.783 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.783 15:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.783 15:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.783 15:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.783 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:59.783 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:59.783 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:59.783 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:59.783 15:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.783 15:42:42 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:15:59.783 [2024-12-06 15:42:42.986702] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:59.783 15:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.783 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:59.783 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.783 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.783 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:59.783 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.783 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:59.783 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.783 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.783 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.783 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.783 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.783 15:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.783 15:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.783 15:42:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.783 15:42:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.783 15:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:59.783 "name": "raid_bdev1", 00:15:59.783 "uuid": "3163469a-874e-4bef-b92d-ef483ef2c74a", 00:15:59.783 "strip_size_kb": 0, 00:15:59.783 "state": "online", 00:15:59.783 "raid_level": "raid1", 00:15:59.783 "superblock": false, 00:15:59.783 "num_base_bdevs": 2, 00:15:59.783 "num_base_bdevs_discovered": 1, 00:15:59.783 "num_base_bdevs_operational": 1, 00:15:59.783 "base_bdevs_list": [ 00:15:59.783 { 00:15:59.783 "name": null, 00:15:59.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.783 "is_configured": false, 00:15:59.783 "data_offset": 0, 00:15:59.783 "data_size": 65536 00:15:59.783 }, 00:15:59.783 { 00:15:59.783 "name": "BaseBdev2", 00:15:59.783 "uuid": "5412177b-831d-5fa3-814c-fcd4153cbc10", 00:15:59.783 "is_configured": true, 00:15:59.783 "data_offset": 0, 00:15:59.783 "data_size": 65536 00:15:59.783 } 00:15:59.783 ] 00:15:59.783 }' 00:15:59.783 15:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.783 15:42:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:00.057 [2024-12-06 15:42:43.084797] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:00.057 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:00.057 Zero copy mechanism will not be used. 00:16:00.057 Running I/O for 60 seconds... 
00:16:00.315 15:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:00.315 15:42:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.315 15:42:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:00.315 [2024-12-06 15:42:43.458446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:00.315 15:42:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.315 15:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:00.315 [2024-12-06 15:42:43.525848] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:00.315 [2024-12-06 15:42:43.528388] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:00.573 [2024-12-06 15:42:43.655058] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:00.573 [2024-12-06 15:42:43.655837] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:00.573 [2024-12-06 15:42:43.863936] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:00.573 [2024-12-06 15:42:43.864187] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:01.088 177.00 IOPS, 531.00 MiB/s [2024-12-06T15:42:44.383Z] [2024-12-06 15:42:44.183569] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:01.088 [2024-12-06 15:42:44.184304] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:01.346 [2024-12-06 15:42:44.388117] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:01.346 15:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:01.346 15:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.346 15:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:01.346 15:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:01.346 15:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.346 15:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.346 15:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.346 15:42:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.346 15:42:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:01.346 15:42:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.346 15:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.346 "name": "raid_bdev1", 00:16:01.346 "uuid": "3163469a-874e-4bef-b92d-ef483ef2c74a", 00:16:01.346 "strip_size_kb": 0, 00:16:01.346 "state": "online", 00:16:01.346 "raid_level": "raid1", 00:16:01.346 "superblock": false, 00:16:01.346 "num_base_bdevs": 2, 00:16:01.346 "num_base_bdevs_discovered": 2, 00:16:01.346 "num_base_bdevs_operational": 2, 00:16:01.346 "process": { 00:16:01.346 "type": "rebuild", 00:16:01.346 "target": "spare", 00:16:01.346 "progress": { 00:16:01.346 "blocks": 12288, 00:16:01.346 "percent": 18 00:16:01.346 } 00:16:01.346 }, 00:16:01.346 "base_bdevs_list": [ 00:16:01.346 { 00:16:01.346 "name": "spare", 00:16:01.346 "uuid": 
"bb703cf1-031b-57bb-992e-d5706ab75686", 00:16:01.346 "is_configured": true, 00:16:01.346 "data_offset": 0, 00:16:01.346 "data_size": 65536 00:16:01.346 }, 00:16:01.346 { 00:16:01.346 "name": "BaseBdev2", 00:16:01.346 "uuid": "5412177b-831d-5fa3-814c-fcd4153cbc10", 00:16:01.346 "is_configured": true, 00:16:01.346 "data_offset": 0, 00:16:01.346 "data_size": 65536 00:16:01.346 } 00:16:01.346 ] 00:16:01.346 }' 00:16:01.346 15:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.346 15:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:01.346 15:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.346 [2024-12-06 15:42:44.615472] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:01.346 [2024-12-06 15:42:44.616198] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:01.604 15:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:01.604 15:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:01.604 15:42:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.604 15:42:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:01.604 [2024-12-06 15:42:44.648876] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:01.604 [2024-12-06 15:42:44.737429] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:01.604 [2024-12-06 15:42:44.739606] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:01.604 [2024-12-06 15:42:44.748133] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.604 [2024-12-06 15:42:44.748172] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:01.604 [2024-12-06 15:42:44.748190] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:01.605 [2024-12-06 15:42:44.793636] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:16:01.605 15:42:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.605 15:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:01.605 15:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.605 15:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.605 15:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:01.605 15:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:01.605 15:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:01.605 15:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.605 15:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.605 15:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.605 15:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.605 15:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.605 15:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.605 15:42:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:16:01.605 15:42:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:01.605 15:42:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.605 15:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.605 "name": "raid_bdev1", 00:16:01.605 "uuid": "3163469a-874e-4bef-b92d-ef483ef2c74a", 00:16:01.605 "strip_size_kb": 0, 00:16:01.605 "state": "online", 00:16:01.605 "raid_level": "raid1", 00:16:01.605 "superblock": false, 00:16:01.605 "num_base_bdevs": 2, 00:16:01.605 "num_base_bdevs_discovered": 1, 00:16:01.605 "num_base_bdevs_operational": 1, 00:16:01.605 "base_bdevs_list": [ 00:16:01.605 { 00:16:01.605 "name": null, 00:16:01.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.605 "is_configured": false, 00:16:01.605 "data_offset": 0, 00:16:01.605 "data_size": 65536 00:16:01.605 }, 00:16:01.605 { 00:16:01.605 "name": "BaseBdev2", 00:16:01.605 "uuid": "5412177b-831d-5fa3-814c-fcd4153cbc10", 00:16:01.605 "is_configured": true, 00:16:01.605 "data_offset": 0, 00:16:01.605 "data_size": 65536 00:16:01.605 } 00:16:01.605 ] 00:16:01.605 }' 00:16:01.605 15:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.605 15:42:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:02.122 187.00 IOPS, 561.00 MiB/s [2024-12-06T15:42:45.417Z] 15:42:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:02.122 15:42:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.122 15:42:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:02.122 15:42:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:02.122 15:42:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.122 
15:42:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.122 15:42:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.122 15:42:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.122 15:42:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:02.122 15:42:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.122 15:42:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.122 "name": "raid_bdev1", 00:16:02.122 "uuid": "3163469a-874e-4bef-b92d-ef483ef2c74a", 00:16:02.122 "strip_size_kb": 0, 00:16:02.122 "state": "online", 00:16:02.122 "raid_level": "raid1", 00:16:02.122 "superblock": false, 00:16:02.122 "num_base_bdevs": 2, 00:16:02.122 "num_base_bdevs_discovered": 1, 00:16:02.122 "num_base_bdevs_operational": 1, 00:16:02.122 "base_bdevs_list": [ 00:16:02.122 { 00:16:02.122 "name": null, 00:16:02.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.122 "is_configured": false, 00:16:02.122 "data_offset": 0, 00:16:02.122 "data_size": 65536 00:16:02.122 }, 00:16:02.122 { 00:16:02.122 "name": "BaseBdev2", 00:16:02.122 "uuid": "5412177b-831d-5fa3-814c-fcd4153cbc10", 00:16:02.122 "is_configured": true, 00:16:02.122 "data_offset": 0, 00:16:02.122 "data_size": 65536 00:16:02.122 } 00:16:02.122 ] 00:16:02.122 }' 00:16:02.122 15:42:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.122 15:42:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:02.122 15:42:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.122 15:42:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:02.122 15:42:45 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:02.122 15:42:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.122 15:42:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:02.122 [2024-12-06 15:42:45.400886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:02.380 15:42:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.380 15:42:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:02.380 [2024-12-06 15:42:45.464872] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:02.380 [2024-12-06 15:42:45.467378] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:02.380 [2024-12-06 15:42:45.581654] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:02.380 [2024-12-06 15:42:45.582184] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:02.640 [2024-12-06 15:42:45.790495] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:02.640 [2024-12-06 15:42:45.790762] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:02.906 199.00 IOPS, 597.00 MiB/s [2024-12-06T15:42:46.201Z] [2024-12-06 15:42:46.147128] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:02.906 [2024-12-06 15:42:46.148047] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:03.164 [2024-12-06 15:42:46.263656] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 
offset_begin: 6144 offset_end: 12288 00:16:03.164 [2024-12-06 15:42:46.264069] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:03.164 15:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:03.164 15:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.164 15:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:03.164 15:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:03.164 15:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.164 15:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.164 15:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.164 15:42:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.422 15:42:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:03.422 15:42:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.422 15:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.422 "name": "raid_bdev1", 00:16:03.422 "uuid": "3163469a-874e-4bef-b92d-ef483ef2c74a", 00:16:03.422 "strip_size_kb": 0, 00:16:03.422 "state": "online", 00:16:03.422 "raid_level": "raid1", 00:16:03.422 "superblock": false, 00:16:03.422 "num_base_bdevs": 2, 00:16:03.422 "num_base_bdevs_discovered": 2, 00:16:03.422 "num_base_bdevs_operational": 2, 00:16:03.422 "process": { 00:16:03.422 "type": "rebuild", 00:16:03.422 "target": "spare", 00:16:03.422 "progress": { 00:16:03.422 "blocks": 12288, 00:16:03.422 "percent": 18 00:16:03.422 } 00:16:03.422 }, 00:16:03.422 
"base_bdevs_list": [ 00:16:03.422 { 00:16:03.422 "name": "spare", 00:16:03.422 "uuid": "bb703cf1-031b-57bb-992e-d5706ab75686", 00:16:03.422 "is_configured": true, 00:16:03.422 "data_offset": 0, 00:16:03.422 "data_size": 65536 00:16:03.422 }, 00:16:03.422 { 00:16:03.422 "name": "BaseBdev2", 00:16:03.422 "uuid": "5412177b-831d-5fa3-814c-fcd4153cbc10", 00:16:03.422 "is_configured": true, 00:16:03.422 "data_offset": 0, 00:16:03.422 "data_size": 65536 00:16:03.422 } 00:16:03.422 ] 00:16:03.422 }' 00:16:03.422 15:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.422 15:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:03.422 15:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.422 15:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:03.423 15:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:03.423 15:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:03.423 15:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:03.423 15:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:03.423 15:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=414 00:16:03.423 15:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:03.423 15:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:03.423 15:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.423 15:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:03.423 15:42:46 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:03.423 15:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.423 15:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.423 15:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.423 15:42:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.423 15:42:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:03.423 [2024-12-06 15:42:46.629190] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:03.423 15:42:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.423 15:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.423 "name": "raid_bdev1", 00:16:03.423 "uuid": "3163469a-874e-4bef-b92d-ef483ef2c74a", 00:16:03.423 "strip_size_kb": 0, 00:16:03.423 "state": "online", 00:16:03.423 "raid_level": "raid1", 00:16:03.423 "superblock": false, 00:16:03.423 "num_base_bdevs": 2, 00:16:03.423 "num_base_bdevs_discovered": 2, 00:16:03.423 "num_base_bdevs_operational": 2, 00:16:03.423 "process": { 00:16:03.423 "type": "rebuild", 00:16:03.423 "target": "spare", 00:16:03.423 "progress": { 00:16:03.423 "blocks": 14336, 00:16:03.423 "percent": 21 00:16:03.423 } 00:16:03.423 }, 00:16:03.423 "base_bdevs_list": [ 00:16:03.423 { 00:16:03.423 "name": "spare", 00:16:03.423 "uuid": "bb703cf1-031b-57bb-992e-d5706ab75686", 00:16:03.423 "is_configured": true, 00:16:03.423 "data_offset": 0, 00:16:03.423 "data_size": 65536 00:16:03.423 }, 00:16:03.423 { 00:16:03.423 "name": "BaseBdev2", 00:16:03.423 "uuid": "5412177b-831d-5fa3-814c-fcd4153cbc10", 00:16:03.423 "is_configured": true, 00:16:03.423 "data_offset": 0, 00:16:03.423 "data_size": 65536 
00:16:03.423 } 00:16:03.423 ] 00:16:03.423 }' 00:16:03.423 15:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.423 15:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:03.423 15:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.681 15:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:03.681 15:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:04.507 167.75 IOPS, 503.25 MiB/s [2024-12-06T15:42:47.802Z] 15:42:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:04.507 15:42:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:04.507 15:42:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.507 15:42:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.507 15:42:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:04.507 15:42:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.507 15:42:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.507 15:42:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.507 15:42:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.507 15:42:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:04.507 [2024-12-06 15:42:47.762596] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:16:04.507 [2024-12-06 15:42:47.763101] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:16:04.507 15:42:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.507 15:42:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.507 "name": "raid_bdev1", 00:16:04.507 "uuid": "3163469a-874e-4bef-b92d-ef483ef2c74a", 00:16:04.507 "strip_size_kb": 0, 00:16:04.507 "state": "online", 00:16:04.507 "raid_level": "raid1", 00:16:04.507 "superblock": false, 00:16:04.507 "num_base_bdevs": 2, 00:16:04.507 "num_base_bdevs_discovered": 2, 00:16:04.507 "num_base_bdevs_operational": 2, 00:16:04.507 "process": { 00:16:04.507 "type": "rebuild", 00:16:04.507 "target": "spare", 00:16:04.507 "progress": { 00:16:04.507 "blocks": 32768, 00:16:04.507 "percent": 50 00:16:04.507 } 00:16:04.507 }, 00:16:04.507 "base_bdevs_list": [ 00:16:04.507 { 00:16:04.507 "name": "spare", 00:16:04.507 "uuid": "bb703cf1-031b-57bb-992e-d5706ab75686", 00:16:04.507 "is_configured": true, 00:16:04.507 "data_offset": 0, 00:16:04.507 "data_size": 65536 00:16:04.507 }, 00:16:04.507 { 00:16:04.507 "name": "BaseBdev2", 00:16:04.507 "uuid": "5412177b-831d-5fa3-814c-fcd4153cbc10", 00:16:04.507 "is_configured": true, 00:16:04.507 "data_offset": 0, 00:16:04.507 "data_size": 65536 00:16:04.507 } 00:16:04.507 ] 00:16:04.507 }' 00:16:04.507 15:42:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.767 15:42:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:04.767 15:42:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.767 15:42:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:04.767 15:42:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:04.767 [2024-12-06 15:42:47.995964] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:16:04.767 [2024-12-06 15:42:47.996726] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:16:05.026 146.40 IOPS, 439.20 MiB/s [2024-12-06T15:42:48.321Z] [2024-12-06 15:42:48.235006] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:16:05.285 [2024-12-06 15:42:48.485774] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:16:05.853 15:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:05.853 15:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:05.853 15:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.853 15:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:05.853 15:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:05.853 15:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.853 15:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.853 15:42:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.853 15:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.853 15:42:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:05.853 [2024-12-06 15:42:48.914280] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:16:05.854 15:42:48 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.854 15:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.854 "name": "raid_bdev1", 00:16:05.854 "uuid": "3163469a-874e-4bef-b92d-ef483ef2c74a", 00:16:05.854 "strip_size_kb": 0, 00:16:05.854 "state": "online", 00:16:05.854 "raid_level": "raid1", 00:16:05.854 "superblock": false, 00:16:05.854 "num_base_bdevs": 2, 00:16:05.854 "num_base_bdevs_discovered": 2, 00:16:05.854 "num_base_bdevs_operational": 2, 00:16:05.854 "process": { 00:16:05.854 "type": "rebuild", 00:16:05.854 "target": "spare", 00:16:05.854 "progress": { 00:16:05.854 "blocks": 51200, 00:16:05.854 "percent": 78 00:16:05.854 } 00:16:05.854 }, 00:16:05.854 "base_bdevs_list": [ 00:16:05.854 { 00:16:05.854 "name": "spare", 00:16:05.854 "uuid": "bb703cf1-031b-57bb-992e-d5706ab75686", 00:16:05.854 "is_configured": true, 00:16:05.854 "data_offset": 0, 00:16:05.854 "data_size": 65536 00:16:05.854 }, 00:16:05.854 { 00:16:05.854 "name": "BaseBdev2", 00:16:05.854 "uuid": "5412177b-831d-5fa3-814c-fcd4153cbc10", 00:16:05.854 "is_configured": true, 00:16:05.854 "data_offset": 0, 00:16:05.854 "data_size": 65536 00:16:05.854 } 00:16:05.854 ] 00:16:05.854 }' 00:16:05.854 15:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.854 15:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:05.854 15:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.854 15:42:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:05.854 15:42:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:06.422 129.67 IOPS, 389.00 MiB/s [2024-12-06T15:42:49.717Z] [2024-12-06 15:42:49.670491] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:06.681 [2024-12-06 15:42:49.770409] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:06.681 [2024-12-06 15:42:49.772441] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.939 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:06.939 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:06.940 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.940 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:06.940 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:06.940 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.940 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.940 15:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.940 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.940 15:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:06.940 15:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.940 116.43 IOPS, 349.29 MiB/s [2024-12-06T15:42:50.235Z] 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.940 "name": "raid_bdev1", 00:16:06.940 "uuid": "3163469a-874e-4bef-b92d-ef483ef2c74a", 00:16:06.940 "strip_size_kb": 0, 00:16:06.940 "state": "online", 00:16:06.940 "raid_level": "raid1", 00:16:06.940 "superblock": false, 00:16:06.940 "num_base_bdevs": 2, 00:16:06.940 "num_base_bdevs_discovered": 2, 00:16:06.940 "num_base_bdevs_operational": 2, 00:16:06.940 "base_bdevs_list": [ 00:16:06.940 { 
00:16:06.940 "name": "spare", 00:16:06.940 "uuid": "bb703cf1-031b-57bb-992e-d5706ab75686", 00:16:06.940 "is_configured": true, 00:16:06.940 "data_offset": 0, 00:16:06.940 "data_size": 65536 00:16:06.940 }, 00:16:06.940 { 00:16:06.940 "name": "BaseBdev2", 00:16:06.940 "uuid": "5412177b-831d-5fa3-814c-fcd4153cbc10", 00:16:06.940 "is_configured": true, 00:16:06.940 "data_offset": 0, 00:16:06.940 "data_size": 65536 00:16:06.940 } 00:16:06.940 ] 00:16:06.940 }' 00:16:06.940 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.940 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:06.940 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.940 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:06.940 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:16:06.940 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:06.940 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.940 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:06.940 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:06.940 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.940 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.940 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.940 15:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.940 15:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set 
+x 00:16:06.940 15:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.940 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.940 "name": "raid_bdev1", 00:16:06.940 "uuid": "3163469a-874e-4bef-b92d-ef483ef2c74a", 00:16:06.940 "strip_size_kb": 0, 00:16:06.940 "state": "online", 00:16:06.940 "raid_level": "raid1", 00:16:06.940 "superblock": false, 00:16:06.940 "num_base_bdevs": 2, 00:16:06.940 "num_base_bdevs_discovered": 2, 00:16:06.940 "num_base_bdevs_operational": 2, 00:16:06.940 "base_bdevs_list": [ 00:16:06.940 { 00:16:06.940 "name": "spare", 00:16:06.940 "uuid": "bb703cf1-031b-57bb-992e-d5706ab75686", 00:16:06.940 "is_configured": true, 00:16:06.940 "data_offset": 0, 00:16:06.940 "data_size": 65536 00:16:06.940 }, 00:16:06.940 { 00:16:06.940 "name": "BaseBdev2", 00:16:06.940 "uuid": "5412177b-831d-5fa3-814c-fcd4153cbc10", 00:16:06.940 "is_configured": true, 00:16:06.940 "data_offset": 0, 00:16:06.940 "data_size": 65536 00:16:06.940 } 00:16:06.940 ] 00:16:06.940 }' 00:16:06.940 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.200 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:07.200 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.200 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:07.200 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:07.200 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.200 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.200 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:07.200 
15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:07.200 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:07.200 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.200 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.200 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.200 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.200 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.200 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.200 15:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.200 15:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.200 15:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.200 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.200 "name": "raid_bdev1", 00:16:07.200 "uuid": "3163469a-874e-4bef-b92d-ef483ef2c74a", 00:16:07.200 "strip_size_kb": 0, 00:16:07.200 "state": "online", 00:16:07.200 "raid_level": "raid1", 00:16:07.200 "superblock": false, 00:16:07.200 "num_base_bdevs": 2, 00:16:07.200 "num_base_bdevs_discovered": 2, 00:16:07.200 "num_base_bdevs_operational": 2, 00:16:07.200 "base_bdevs_list": [ 00:16:07.200 { 00:16:07.200 "name": "spare", 00:16:07.200 "uuid": "bb703cf1-031b-57bb-992e-d5706ab75686", 00:16:07.200 "is_configured": true, 00:16:07.200 "data_offset": 0, 00:16:07.200 "data_size": 65536 00:16:07.200 }, 00:16:07.200 { 00:16:07.200 "name": "BaseBdev2", 00:16:07.200 "uuid": "5412177b-831d-5fa3-814c-fcd4153cbc10", 
00:16:07.200 "is_configured": true, 00:16:07.200 "data_offset": 0, 00:16:07.200 "data_size": 65536 00:16:07.200 } 00:16:07.200 ] 00:16:07.200 }' 00:16:07.200 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.200 15:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.459 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:07.460 15:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.460 15:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.460 [2024-12-06 15:42:50.722877] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:07.460 [2024-12-06 15:42:50.723035] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:07.719 00:16:07.719 Latency(us) 00:16:07.719 [2024-12-06T15:42:51.014Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:07.719 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:07.719 raid_bdev1 : 7.71 108.41 325.23 0.00 0.00 12046.42 296.10 108647.63 00:16:07.719 [2024-12-06T15:42:51.014Z] =================================================================================================================== 00:16:07.719 [2024-12-06T15:42:51.014Z] Total : 108.41 325.23 0.00 0.00 12046.42 296.10 108647.63 00:16:07.719 [2024-12-06 15:42:50.809006] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:07.719 [2024-12-06 15:42:50.809181] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.719 [2024-12-06 15:42:50.809305] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:07.720 [2024-12-06 15:42:50.809421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name raid_bdev1, state offline 00:16:07.720 { 00:16:07.720 "results": [ 00:16:07.720 { 00:16:07.720 "job": "raid_bdev1", 00:16:07.720 "core_mask": "0x1", 00:16:07.720 "workload": "randrw", 00:16:07.720 "percentage": 50, 00:16:07.720 "status": "finished", 00:16:07.720 "queue_depth": 2, 00:16:07.720 "io_size": 3145728, 00:16:07.720 "runtime": 7.711544, 00:16:07.720 "iops": 108.40889969635134, 00:16:07.720 "mibps": 325.226699089054, 00:16:07.720 "io_failed": 0, 00:16:07.720 "io_timeout": 0, 00:16:07.720 "avg_latency_us": 12046.423089487136, 00:16:07.720 "min_latency_us": 296.09638554216866, 00:16:07.720 "max_latency_us": 108647.63373493977 00:16:07.720 } 00:16:07.720 ], 00:16:07.720 "core_count": 1 00:16:07.720 } 00:16:07.720 15:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.720 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.720 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:07.720 15:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.720 15:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.720 15:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.720 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:07.720 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:07.720 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:07.720 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:07.720 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:07.720 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # 
bdev_list=('spare') 00:16:07.720 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:07.720 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:07.720 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:07.720 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:07.720 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:07.720 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:07.720 15:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:07.979 /dev/nbd0 00:16:07.979 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:07.979 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:07.979 15:42:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:07.979 15:42:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:16:07.979 15:42:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:07.979 15:42:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:07.979 15:42:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:07.979 15:42:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:16:07.979 15:42:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:07.979 15:42:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:07.979 15:42:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:07.979 1+0 records in 00:16:07.979 1+0 records out 00:16:07.979 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414323 s, 9.9 MB/s 00:16:07.979 15:42:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:07.979 15:42:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:16:07.979 15:42:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:07.979 15:42:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:07.979 15:42:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:16:07.979 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:07.979 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:07.979 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:07.979 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:16:07.979 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:16:07.979 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:07.979 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:16:07.979 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:07.979 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:07.979 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:07.979 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:07.979 
15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:07.979 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:07.979 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:16:08.239 /dev/nbd1 00:16:08.239 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:08.239 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:08.239 15:42:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:08.239 15:42:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:16:08.239 15:42:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:08.239 15:42:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:08.239 15:42:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:08.239 15:42:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:16:08.239 15:42:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:08.239 15:42:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:08.239 15:42:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:08.239 1+0 records in 00:16:08.239 1+0 records out 00:16:08.239 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408621 s, 10.0 MB/s 00:16:08.240 15:42:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:08.240 15:42:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 
00:16:08.240 15:42:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:08.240 15:42:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:08.240 15:42:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:16:08.240 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:08.240 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:08.240 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:08.499 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:08.499 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:08.499 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:08.499 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:08.499 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:08.499 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:08.499 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:08.758 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:08.758 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:08.758 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:08.758 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:08.758 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:08.758 
15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:08.758 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:08.758 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:08.758 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:08.758 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:08.758 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:08.758 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:08.758 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:08.758 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:08.758 15:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:08.758 15:42:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:08.758 15:42:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:08.758 15:42:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:08.758 15:42:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:08.758 15:42:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:08.758 15:42:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:08.758 15:42:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:08.758 15:42:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:08.758 15:42:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = 
true ']' 00:16:08.758 15:42:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76506 00:16:08.758 15:42:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76506 ']' 00:16:08.758 15:42:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76506 00:16:08.758 15:42:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:16:09.017 15:42:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:09.017 15:42:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76506 00:16:09.017 15:42:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:09.017 15:42:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:09.017 15:42:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76506' 00:16:09.017 killing process with pid 76506 00:16:09.017 15:42:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76506 00:16:09.017 Received shutdown signal, test time was about 9.020258 seconds 00:16:09.017 00:16:09.017 Latency(us) 00:16:09.017 [2024-12-06T15:42:52.312Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:09.017 [2024-12-06T15:42:52.312Z] =================================================================================================================== 00:16:09.017 [2024-12-06T15:42:52.312Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:09.017 [2024-12-06 15:42:52.093387] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:09.017 15:42:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76506 00:16:09.276 [2024-12-06 15:42:52.348484] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:10.654 15:42:53 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@786 -- # return 0 00:16:10.654 00:16:10.654 real 0m12.388s 00:16:10.654 user 0m15.307s 00:16:10.654 sys 0m1.878s 00:16:10.654 15:42:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:10.654 15:42:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.654 ************************************ 00:16:10.654 END TEST raid_rebuild_test_io 00:16:10.654 ************************************ 00:16:10.654 15:42:53 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:16:10.654 15:42:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:10.654 15:42:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:10.654 15:42:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:10.654 ************************************ 00:16:10.655 START TEST raid_rebuild_test_sb_io 00:16:10.655 ************************************ 00:16:10.655 15:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:16:10.655 15:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:10.655 15:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:10.655 15:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:10.655 15:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:16:10.655 15:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:10.655 15:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:10.655 15:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:10.655 15:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 
00:16:10.655 15:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:10.655 15:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:10.655 15:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:10.655 15:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:10.655 15:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:10.655 15:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:10.655 15:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:10.655 15:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:10.655 15:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:10.655 15:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:10.655 15:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:10.655 15:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:10.655 15:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:10.655 15:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:10.655 15:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:10.655 15:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:10.655 15:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76893 00:16:10.655 15:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L 
bdev_raid 00:16:10.655 15:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76893 00:16:10.655 15:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 76893 ']' 00:16:10.655 15:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.655 15:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:10.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:10.655 15:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.655 15:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:10.655 15:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.655 [2024-12-06 15:42:53.863092] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:16:10.655 [2024-12-06 15:42:53.863246] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:16:10.655 Zero copy mechanism will not be used. 
00:16:10.655 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76893 ] 00:16:10.919 [2024-12-06 15:42:54.046942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.919 [2024-12-06 15:42:54.183297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.187 [2024-12-06 15:42:54.426494] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:11.188 [2024-12-06 15:42:54.426578] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:11.447 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:11.447 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:16:11.447 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:11.447 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:11.447 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.447 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.708 BaseBdev1_malloc 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.708 [2024-12-06 15:42:54.750982] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:11.708 [2024-12-06 15:42:54.751058] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.708 [2024-12-06 15:42:54.751088] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:11.708 [2024-12-06 15:42:54.751104] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.708 [2024-12-06 15:42:54.753819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.708 [2024-12-06 15:42:54.753862] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:11.708 BaseBdev1 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.708 BaseBdev2_malloc 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.708 [2024-12-06 15:42:54.814308] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:11.708 [2024-12-06 15:42:54.814379] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.708 [2024-12-06 15:42:54.814407] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x616000007e80 00:16:11.708 [2024-12-06 15:42:54.814422] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.708 [2024-12-06 15:42:54.817155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.708 [2024-12-06 15:42:54.817198] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:11.708 BaseBdev2 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.708 spare_malloc 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.708 spare_delay 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.708 [2024-12-06 15:42:54.900735] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:11.708 
[2024-12-06 15:42:54.900801] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.708 [2024-12-06 15:42:54.900823] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:11.708 [2024-12-06 15:42:54.900839] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.708 [2024-12-06 15:42:54.903546] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.708 [2024-12-06 15:42:54.903589] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:11.708 spare 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.708 [2024-12-06 15:42:54.912791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:11.708 [2024-12-06 15:42:54.915131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:11.708 [2024-12-06 15:42:54.915329] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:11.708 [2024-12-06 15:42:54.915346] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:11.708 [2024-12-06 15:42:54.915622] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:11.708 [2024-12-06 15:42:54.915792] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:11.708 [2024-12-06 15:42:54.915803] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000007780 00:16:11.708 [2024-12-06 15:42:54.915952] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.708 "name": "raid_bdev1", 00:16:11.708 "uuid": "d6a9c682-333e-4cf4-9c9e-aed8894bd22f", 00:16:11.708 "strip_size_kb": 0, 00:16:11.708 "state": "online", 00:16:11.708 "raid_level": "raid1", 00:16:11.708 "superblock": true, 00:16:11.708 "num_base_bdevs": 2, 00:16:11.708 "num_base_bdevs_discovered": 2, 00:16:11.708 "num_base_bdevs_operational": 2, 00:16:11.708 "base_bdevs_list": [ 00:16:11.708 { 00:16:11.708 "name": "BaseBdev1", 00:16:11.708 "uuid": "4b41062c-e47d-5c9c-bb3d-4fba829afdb8", 00:16:11.708 "is_configured": true, 00:16:11.708 "data_offset": 2048, 00:16:11.708 "data_size": 63488 00:16:11.708 }, 00:16:11.708 { 00:16:11.708 "name": "BaseBdev2", 00:16:11.708 "uuid": "d00a6c64-39cb-5cf5-9a2c-909cda1365ca", 00:16:11.708 "is_configured": true, 00:16:11.708 "data_offset": 2048, 00:16:11.708 "data_size": 63488 00:16:11.708 } 00:16:11.708 ] 00:16:11.708 }' 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.708 15:42:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.279 15:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:12.279 15:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:12.279 15:42:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.279 15:42:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.279 [2024-12-06 15:42:55.340950] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:12.279 15:42:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.279 15:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:16:12.279 15:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:12.279 15:42:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.279 15:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:12.279 15:42:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.279 15:42:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.279 15:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:12.279 15:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:16:12.279 15:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:12.279 15:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:12.279 15:42:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.280 15:42:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.280 [2024-12-06 15:42:55.428635] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:12.280 15:42:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.280 15:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:12.280 15:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:12.280 15:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.280 15:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:12.280 15:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:12.280 
15:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:12.280 15:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.280 15:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.280 15:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.280 15:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.280 15:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.280 15:42:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.280 15:42:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.280 15:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.280 15:42:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.280 15:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.280 "name": "raid_bdev1", 00:16:12.280 "uuid": "d6a9c682-333e-4cf4-9c9e-aed8894bd22f", 00:16:12.280 "strip_size_kb": 0, 00:16:12.280 "state": "online", 00:16:12.280 "raid_level": "raid1", 00:16:12.280 "superblock": true, 00:16:12.280 "num_base_bdevs": 2, 00:16:12.280 "num_base_bdevs_discovered": 1, 00:16:12.280 "num_base_bdevs_operational": 1, 00:16:12.280 "base_bdevs_list": [ 00:16:12.280 { 00:16:12.280 "name": null, 00:16:12.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.280 "is_configured": false, 00:16:12.280 "data_offset": 0, 00:16:12.280 "data_size": 63488 00:16:12.280 }, 00:16:12.280 { 00:16:12.280 "name": "BaseBdev2", 00:16:12.280 "uuid": "d00a6c64-39cb-5cf5-9a2c-909cda1365ca", 00:16:12.280 "is_configured": true, 00:16:12.280 "data_offset": 2048, 
00:16:12.280 "data_size": 63488 00:16:12.280 } 00:16:12.280 ] 00:16:12.280 }' 00:16:12.280 15:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.280 15:42:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.280 [2024-12-06 15:42:55.530511] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:12.280 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:12.280 Zero copy mechanism will not be used. 00:16:12.280 Running I/O for 60 seconds... 00:16:12.845 15:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:12.845 15:42:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.845 15:42:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.845 [2024-12-06 15:42:55.847548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:12.845 15:42:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.845 15:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:12.845 [2024-12-06 15:42:55.898813] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:12.845 [2024-12-06 15:42:55.901286] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:12.846 [2024-12-06 15:42:56.008822] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:12.846 [2024-12-06 15:42:56.009611] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:13.104 [2024-12-06 15:42:56.227804] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:13.104 
[2024-12-06 15:42:56.228068] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:13.362 175.00 IOPS, 525.00 MiB/s [2024-12-06T15:42:56.657Z] [2024-12-06 15:42:56.546928] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:13.362 [2024-12-06 15:42:56.547705] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:13.621 [2024-12-06 15:42:56.757286] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:13.621 [2024-12-06 15:42:56.757697] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:13.621 15:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:13.621 15:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.621 15:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:13.621 15:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:13.621 15:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.621 15:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.621 15:42:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.621 15:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.621 15:42:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:13.880 15:42:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:13.880 15:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.880 "name": "raid_bdev1", 00:16:13.880 "uuid": "d6a9c682-333e-4cf4-9c9e-aed8894bd22f", 00:16:13.880 "strip_size_kb": 0, 00:16:13.880 "state": "online", 00:16:13.880 "raid_level": "raid1", 00:16:13.880 "superblock": true, 00:16:13.880 "num_base_bdevs": 2, 00:16:13.880 "num_base_bdevs_discovered": 2, 00:16:13.880 "num_base_bdevs_operational": 2, 00:16:13.880 "process": { 00:16:13.880 "type": "rebuild", 00:16:13.880 "target": "spare", 00:16:13.880 "progress": { 00:16:13.880 "blocks": 12288, 00:16:13.880 "percent": 19 00:16:13.880 } 00:16:13.880 }, 00:16:13.880 "base_bdevs_list": [ 00:16:13.880 { 00:16:13.880 "name": "spare", 00:16:13.880 "uuid": "d18e48a2-37b8-5a0f-bf9a-e68b55ea2fbf", 00:16:13.880 "is_configured": true, 00:16:13.880 "data_offset": 2048, 00:16:13.880 "data_size": 63488 00:16:13.880 }, 00:16:13.880 { 00:16:13.880 "name": "BaseBdev2", 00:16:13.880 "uuid": "d00a6c64-39cb-5cf5-9a2c-909cda1365ca", 00:16:13.880 "is_configured": true, 00:16:13.880 "data_offset": 2048, 00:16:13.880 "data_size": 63488 00:16:13.880 } 00:16:13.880 ] 00:16:13.880 }' 00:16:13.880 15:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.880 15:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:13.880 15:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.880 [2024-12-06 15:42:57.002566] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:13.880 [2024-12-06 15:42:57.003332] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:13.880 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.880 15:42:57 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:13.880 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.880 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:13.880 [2024-12-06 15:42:57.034367] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:13.881 [2024-12-06 15:42:57.112774] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:14.140 [2024-12-06 15:42:57.231643] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:14.140 [2024-12-06 15:42:57.241039] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:14.140 [2024-12-06 15:42:57.241092] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:14.140 [2024-12-06 15:42:57.241107] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:14.140 [2024-12-06 15:42:57.287129] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:16:14.140 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.140 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:14.140 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:14.140 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.140 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:14.140 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:14.140 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:14.140 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.140 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.140 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.140 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.140 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.140 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.140 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.140 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.140 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.140 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.140 "name": "raid_bdev1", 00:16:14.140 "uuid": "d6a9c682-333e-4cf4-9c9e-aed8894bd22f", 00:16:14.140 "strip_size_kb": 0, 00:16:14.140 "state": "online", 00:16:14.140 "raid_level": "raid1", 00:16:14.140 "superblock": true, 00:16:14.140 "num_base_bdevs": 2, 00:16:14.140 "num_base_bdevs_discovered": 1, 00:16:14.140 "num_base_bdevs_operational": 1, 00:16:14.140 "base_bdevs_list": [ 00:16:14.140 { 00:16:14.140 "name": null, 00:16:14.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.140 "is_configured": false, 00:16:14.140 "data_offset": 0, 00:16:14.140 "data_size": 63488 00:16:14.140 }, 00:16:14.140 { 00:16:14.140 "name": "BaseBdev2", 00:16:14.140 "uuid": "d00a6c64-39cb-5cf5-9a2c-909cda1365ca", 00:16:14.140 "is_configured": true, 00:16:14.140 "data_offset": 2048, 00:16:14.140 "data_size": 63488 00:16:14.140 } 
00:16:14.140 ] 00:16:14.140 }' 00:16:14.140 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.140 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.657 154.00 IOPS, 462.00 MiB/s [2024-12-06T15:42:57.952Z] 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:14.657 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.657 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:14.657 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:14.657 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.657 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.657 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.657 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.657 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.657 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.657 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.657 "name": "raid_bdev1", 00:16:14.657 "uuid": "d6a9c682-333e-4cf4-9c9e-aed8894bd22f", 00:16:14.657 "strip_size_kb": 0, 00:16:14.657 "state": "online", 00:16:14.657 "raid_level": "raid1", 00:16:14.657 "superblock": true, 00:16:14.657 "num_base_bdevs": 2, 00:16:14.657 "num_base_bdevs_discovered": 1, 00:16:14.657 "num_base_bdevs_operational": 1, 00:16:14.657 "base_bdevs_list": [ 00:16:14.657 { 00:16:14.657 "name": null, 00:16:14.657 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:14.657 "is_configured": false, 00:16:14.657 "data_offset": 0, 00:16:14.657 "data_size": 63488 00:16:14.657 }, 00:16:14.657 { 00:16:14.657 "name": "BaseBdev2", 00:16:14.657 "uuid": "d00a6c64-39cb-5cf5-9a2c-909cda1365ca", 00:16:14.657 "is_configured": true, 00:16:14.657 "data_offset": 2048, 00:16:14.657 "data_size": 63488 00:16:14.657 } 00:16:14.657 ] 00:16:14.657 }' 00:16:14.657 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.657 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:14.657 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.657 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:14.657 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:14.657 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.657 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.657 [2024-12-06 15:42:57.862588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:14.657 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.657 15:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:14.657 [2024-12-06 15:42:57.922146] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:14.657 [2024-12-06 15:42:57.924651] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:14.916 [2024-12-06 15:42:58.033530] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:14.916 [2024-12-06 
15:42:58.034317] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:15.175 [2024-12-06 15:42:58.269936] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:15.175 [2024-12-06 15:42:58.270371] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:15.434 160.67 IOPS, 482.00 MiB/s [2024-12-06T15:42:58.729Z] [2024-12-06 15:42:58.726374] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:15.693 15:42:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:15.693 15:42:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.693 15:42:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:15.693 15:42:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:15.693 15:42:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.693 15:42:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.693 15:42:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.693 15:42:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.693 15:42:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.693 15:42:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.693 15:42:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.693 "name": "raid_bdev1", 00:16:15.693 "uuid": 
"d6a9c682-333e-4cf4-9c9e-aed8894bd22f", 00:16:15.693 "strip_size_kb": 0, 00:16:15.693 "state": "online", 00:16:15.693 "raid_level": "raid1", 00:16:15.693 "superblock": true, 00:16:15.694 "num_base_bdevs": 2, 00:16:15.694 "num_base_bdevs_discovered": 2, 00:16:15.694 "num_base_bdevs_operational": 2, 00:16:15.694 "process": { 00:16:15.694 "type": "rebuild", 00:16:15.694 "target": "spare", 00:16:15.694 "progress": { 00:16:15.694 "blocks": 12288, 00:16:15.694 "percent": 19 00:16:15.694 } 00:16:15.694 }, 00:16:15.694 "base_bdevs_list": [ 00:16:15.694 { 00:16:15.694 "name": "spare", 00:16:15.694 "uuid": "d18e48a2-37b8-5a0f-bf9a-e68b55ea2fbf", 00:16:15.694 "is_configured": true, 00:16:15.694 "data_offset": 2048, 00:16:15.694 "data_size": 63488 00:16:15.694 }, 00:16:15.694 { 00:16:15.694 "name": "BaseBdev2", 00:16:15.694 "uuid": "d00a6c64-39cb-5cf5-9a2c-909cda1365ca", 00:16:15.694 "is_configured": true, 00:16:15.694 "data_offset": 2048, 00:16:15.694 "data_size": 63488 00:16:15.694 } 00:16:15.694 ] 00:16:15.694 }' 00:16:15.694 15:42:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.694 [2024-12-06 15:42:58.974348] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:15.953 15:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:15.953 15:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.953 15:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:15.953 15:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:15.953 15:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:15.953 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:15.953 15:42:59 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:15.953 15:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:15.953 15:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:15.953 15:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=427 00:16:15.953 15:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:15.953 15:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:15.953 15:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.953 15:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:15.953 15:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:15.953 15:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.953 15:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.953 15:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.953 15:42:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.953 15:42:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.953 15:42:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.953 15:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.953 "name": "raid_bdev1", 00:16:15.953 "uuid": "d6a9c682-333e-4cf4-9c9e-aed8894bd22f", 00:16:15.953 "strip_size_kb": 0, 00:16:15.953 "state": "online", 00:16:15.953 "raid_level": "raid1", 00:16:15.953 "superblock": true, 
00:16:15.953 "num_base_bdevs": 2, 00:16:15.953 "num_base_bdevs_discovered": 2, 00:16:15.953 "num_base_bdevs_operational": 2, 00:16:15.954 "process": { 00:16:15.954 "type": "rebuild", 00:16:15.954 "target": "spare", 00:16:15.954 "progress": { 00:16:15.954 "blocks": 14336, 00:16:15.954 "percent": 22 00:16:15.954 } 00:16:15.954 }, 00:16:15.954 "base_bdevs_list": [ 00:16:15.954 { 00:16:15.954 "name": "spare", 00:16:15.954 "uuid": "d18e48a2-37b8-5a0f-bf9a-e68b55ea2fbf", 00:16:15.954 "is_configured": true, 00:16:15.954 "data_offset": 2048, 00:16:15.954 "data_size": 63488 00:16:15.954 }, 00:16:15.954 { 00:16:15.954 "name": "BaseBdev2", 00:16:15.954 "uuid": "d00a6c64-39cb-5cf5-9a2c-909cda1365ca", 00:16:15.954 "is_configured": true, 00:16:15.954 "data_offset": 2048, 00:16:15.954 "data_size": 63488 00:16:15.954 } 00:16:15.954 ] 00:16:15.954 }' 00:16:15.954 15:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.954 15:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:15.954 15:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.954 15:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:15.954 15:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:15.954 [2024-12-06 15:42:59.194101] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:15.954 [2024-12-06 15:42:59.207313] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:16.521 148.75 IOPS, 446.25 MiB/s [2024-12-06T15:42:59.816Z] [2024-12-06 15:42:59.545142] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:16:16.521 [2024-12-06 15:42:59.647144] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:16:16.780 [2024-12-06 15:42:59.888721] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:16:17.040 [2024-12-06 15:43:00.105472] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:16:17.040 15:43:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:17.040 15:43:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:17.040 15:43:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.040 15:43:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:17.040 15:43:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:17.040 15:43:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.040 15:43:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.040 15:43:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.040 15:43:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.040 15:43:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.040 15:43:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.040 15:43:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.040 "name": "raid_bdev1", 00:16:17.040 "uuid": "d6a9c682-333e-4cf4-9c9e-aed8894bd22f", 00:16:17.040 "strip_size_kb": 0, 00:16:17.040 "state": "online", 00:16:17.040 "raid_level": 
"raid1", 00:16:17.040 "superblock": true, 00:16:17.040 "num_base_bdevs": 2, 00:16:17.040 "num_base_bdevs_discovered": 2, 00:16:17.040 "num_base_bdevs_operational": 2, 00:16:17.040 "process": { 00:16:17.040 "type": "rebuild", 00:16:17.040 "target": "spare", 00:16:17.040 "progress": { 00:16:17.040 "blocks": 28672, 00:16:17.040 "percent": 45 00:16:17.040 } 00:16:17.040 }, 00:16:17.040 "base_bdevs_list": [ 00:16:17.040 { 00:16:17.040 "name": "spare", 00:16:17.040 "uuid": "d18e48a2-37b8-5a0f-bf9a-e68b55ea2fbf", 00:16:17.040 "is_configured": true, 00:16:17.040 "data_offset": 2048, 00:16:17.040 "data_size": 63488 00:16:17.040 }, 00:16:17.040 { 00:16:17.040 "name": "BaseBdev2", 00:16:17.040 "uuid": "d00a6c64-39cb-5cf5-9a2c-909cda1365ca", 00:16:17.040 "is_configured": true, 00:16:17.040 "data_offset": 2048, 00:16:17.040 "data_size": 63488 00:16:17.040 } 00:16:17.040 ] 00:16:17.040 }' 00:16:17.040 15:43:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.040 15:43:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:17.040 15:43:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.040 15:43:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:17.040 15:43:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:17.556 128.00 IOPS, 384.00 MiB/s [2024-12-06T15:43:00.852Z] [2024-12-06 15:43:00.776947] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:16:17.814 [2024-12-06 15:43:01.088205] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:16:18.073 [2024-12-06 15:43:01.196151] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 
00:16:18.073 15:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:18.073 15:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:18.073 15:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.073 15:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:18.073 15:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:18.073 15:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.073 15:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.073 15:43:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.073 15:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.073 15:43:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.073 15:43:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.073 15:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.073 "name": "raid_bdev1", 00:16:18.073 "uuid": "d6a9c682-333e-4cf4-9c9e-aed8894bd22f", 00:16:18.073 "strip_size_kb": 0, 00:16:18.073 "state": "online", 00:16:18.073 "raid_level": "raid1", 00:16:18.073 "superblock": true, 00:16:18.073 "num_base_bdevs": 2, 00:16:18.073 "num_base_bdevs_discovered": 2, 00:16:18.073 "num_base_bdevs_operational": 2, 00:16:18.073 "process": { 00:16:18.074 "type": "rebuild", 00:16:18.074 "target": "spare", 00:16:18.074 "progress": { 00:16:18.074 "blocks": 47104, 00:16:18.074 "percent": 74 00:16:18.074 } 00:16:18.074 }, 00:16:18.074 "base_bdevs_list": [ 00:16:18.074 { 00:16:18.074 "name": "spare", 
00:16:18.074 "uuid": "d18e48a2-37b8-5a0f-bf9a-e68b55ea2fbf", 00:16:18.074 "is_configured": true, 00:16:18.074 "data_offset": 2048, 00:16:18.074 "data_size": 63488 00:16:18.074 }, 00:16:18.074 { 00:16:18.074 "name": "BaseBdev2", 00:16:18.074 "uuid": "d00a6c64-39cb-5cf5-9a2c-909cda1365ca", 00:16:18.074 "is_configured": true, 00:16:18.074 "data_offset": 2048, 00:16:18.074 "data_size": 63488 00:16:18.074 } 00:16:18.074 ] 00:16:18.074 }' 00:16:18.074 15:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.332 15:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:18.332 15:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.332 15:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:18.332 15:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:18.332 [2024-12-06 15:43:01.529150] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:16:18.899 114.83 IOPS, 344.50 MiB/s [2024-12-06T15:43:02.194Z] [2024-12-06 15:43:02.189478] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:19.157 [2024-12-06 15:43:02.289287] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:19.157 [2024-12-06 15:43:02.291796] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.157 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:19.157 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:19.157 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.157 15:43:02 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:19.157 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:19.157 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.157 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.157 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.157 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.157 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:19.416 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.416 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.416 "name": "raid_bdev1", 00:16:19.416 "uuid": "d6a9c682-333e-4cf4-9c9e-aed8894bd22f", 00:16:19.416 "strip_size_kb": 0, 00:16:19.416 "state": "online", 00:16:19.416 "raid_level": "raid1", 00:16:19.416 "superblock": true, 00:16:19.416 "num_base_bdevs": 2, 00:16:19.416 "num_base_bdevs_discovered": 2, 00:16:19.416 "num_base_bdevs_operational": 2, 00:16:19.416 "base_bdevs_list": [ 00:16:19.416 { 00:16:19.416 "name": "spare", 00:16:19.416 "uuid": "d18e48a2-37b8-5a0f-bf9a-e68b55ea2fbf", 00:16:19.416 "is_configured": true, 00:16:19.416 "data_offset": 2048, 00:16:19.416 "data_size": 63488 00:16:19.416 }, 00:16:19.416 { 00:16:19.416 "name": "BaseBdev2", 00:16:19.416 "uuid": "d00a6c64-39cb-5cf5-9a2c-909cda1365ca", 00:16:19.416 "is_configured": true, 00:16:19.416 "data_offset": 2048, 00:16:19.416 "data_size": 63488 00:16:19.416 } 00:16:19.416 ] 00:16:19.416 }' 00:16:19.416 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.416 15:43:02 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:19.416 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.416 105.14 IOPS, 315.43 MiB/s [2024-12-06T15:43:02.711Z] 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:19.416 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:16:19.416 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:19.416 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.416 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:19.416 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:19.416 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.416 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.416 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.416 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:19.416 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.416 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.416 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.416 "name": "raid_bdev1", 00:16:19.416 "uuid": "d6a9c682-333e-4cf4-9c9e-aed8894bd22f", 00:16:19.416 "strip_size_kb": 0, 00:16:19.416 "state": "online", 00:16:19.416 "raid_level": "raid1", 00:16:19.416 "superblock": true, 00:16:19.416 "num_base_bdevs": 2, 00:16:19.416 
"num_base_bdevs_discovered": 2, 00:16:19.416 "num_base_bdevs_operational": 2, 00:16:19.416 "base_bdevs_list": [ 00:16:19.416 { 00:16:19.416 "name": "spare", 00:16:19.416 "uuid": "d18e48a2-37b8-5a0f-bf9a-e68b55ea2fbf", 00:16:19.416 "is_configured": true, 00:16:19.416 "data_offset": 2048, 00:16:19.416 "data_size": 63488 00:16:19.416 }, 00:16:19.416 { 00:16:19.416 "name": "BaseBdev2", 00:16:19.416 "uuid": "d00a6c64-39cb-5cf5-9a2c-909cda1365ca", 00:16:19.416 "is_configured": true, 00:16:19.416 "data_offset": 2048, 00:16:19.416 "data_size": 63488 00:16:19.416 } 00:16:19.416 ] 00:16:19.416 }' 00:16:19.416 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.416 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:19.416 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.416 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:19.416 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:19.416 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:19.416 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:19.416 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:19.416 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:19.416 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:19.416 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.416 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.416 15:43:02 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.416 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.416 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.416 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.416 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.416 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:19.674 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.674 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.674 "name": "raid_bdev1", 00:16:19.674 "uuid": "d6a9c682-333e-4cf4-9c9e-aed8894bd22f", 00:16:19.674 "strip_size_kb": 0, 00:16:19.674 "state": "online", 00:16:19.674 "raid_level": "raid1", 00:16:19.674 "superblock": true, 00:16:19.674 "num_base_bdevs": 2, 00:16:19.674 "num_base_bdevs_discovered": 2, 00:16:19.674 "num_base_bdevs_operational": 2, 00:16:19.674 "base_bdevs_list": [ 00:16:19.674 { 00:16:19.674 "name": "spare", 00:16:19.674 "uuid": "d18e48a2-37b8-5a0f-bf9a-e68b55ea2fbf", 00:16:19.674 "is_configured": true, 00:16:19.674 "data_offset": 2048, 00:16:19.674 "data_size": 63488 00:16:19.674 }, 00:16:19.674 { 00:16:19.674 "name": "BaseBdev2", 00:16:19.674 "uuid": "d00a6c64-39cb-5cf5-9a2c-909cda1365ca", 00:16:19.674 "is_configured": true, 00:16:19.674 "data_offset": 2048, 00:16:19.674 "data_size": 63488 00:16:19.674 } 00:16:19.674 ] 00:16:19.674 }' 00:16:19.674 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.674 15:43:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:19.932 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:19.932 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.932 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:19.932 [2024-12-06 15:43:03.117009] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:19.932 [2024-12-06 15:43:03.117049] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:19.932 00:16:19.932 Latency(us) 00:16:19.932 [2024-12-06T15:43:03.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.932 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:19.932 raid_bdev1 : 7.68 99.61 298.82 0.00 0.00 13486.54 292.81 116227.70 00:16:19.932 [2024-12-06T15:43:03.227Z] =================================================================================================================== 00:16:19.932 [2024-12-06T15:43:03.227Z] Total : 99.61 298.82 0.00 0.00 13486.54 292.81 116227.70 00:16:19.932 [2024-12-06 15:43:03.222952] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:19.932 [2024-12-06 15:43:03.223146] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.932 [2024-12-06 15:43:03.223267] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:19.932 [2024-12-06 15:43:03.223381] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:19.932 { 00:16:19.932 "results": [ 00:16:19.932 { 00:16:19.932 "job": "raid_bdev1", 00:16:19.932 "core_mask": "0x1", 00:16:19.932 "workload": "randrw", 00:16:19.932 "percentage": 50, 00:16:19.932 "status": "finished", 00:16:19.932 "queue_depth": 2, 00:16:19.932 "io_size": 3145728, 00:16:19.932 "runtime": 7.680094, 00:16:19.932 "iops": 
99.60815583767595, 00:16:19.932 "mibps": 298.82446751302786, 00:16:19.932 "io_failed": 0, 00:16:19.932 "io_timeout": 0, 00:16:19.932 "avg_latency_us": 13486.535594928733, 00:16:19.932 "min_latency_us": 292.8064257028112, 00:16:19.932 "max_latency_us": 116227.70120481927 00:16:19.932 } 00:16:19.932 ], 00:16:19.933 "core_count": 1 00:16:19.933 } 00:16:20.191 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.191 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.191 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:20.191 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.191 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:20.191 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.191 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:20.191 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:20.191 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:20.191 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:20.191 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:20.191 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:20.191 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:20.191 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:20.191 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:20.191 15:43:03 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:20.191 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:20.191 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:20.191 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:20.449 /dev/nbd0 00:16:20.449 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:20.449 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:20.449 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:20.449 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:16:20.449 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:20.449 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:20.449 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:20.449 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:16:20.449 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:20.449 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:20.449 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:20.449 1+0 records in 00:16:20.449 1+0 records out 00:16:20.449 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278511 s, 14.7 MB/s 00:16:20.449 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:20.449 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:16:20.449 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:20.449 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:20.449 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:16:20.449 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:20.449 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:20.449 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:20.449 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:16:20.449 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:16:20.449 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:20.449 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:16:20.449 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:20.449 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:20.449 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:20.449 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:20.449 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:20.449 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:20.449 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:16:20.707 /dev/nbd1 00:16:20.707 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:20.707 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:20.707 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:20.707 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:16:20.707 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:20.707 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:20.707 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:20.707 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:16:20.707 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:20.707 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:20.707 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:20.707 1+0 records in 00:16:20.707 1+0 records out 00:16:20.707 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000380652 s, 10.8 MB/s 00:16:20.707 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:20.707 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:16:20.707 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:20.707 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:20.707 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:16:20.707 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:20.707 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:20.707 15:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:20.966 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:20.966 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:20.966 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:20.966 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:20.966 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:20.966 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:20.966 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:20.966 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:20.966 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:20.966 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:20.966 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:20.966 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:20.966 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:20.966 15:43:04 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:20.966 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:20.966 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:20.966 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:20.966 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:20.966 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:20.966 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:20.966 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:20.966 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:21.225 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:21.225 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:21.225 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:21.225 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:21.225 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:21.225 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:21.225 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:21.225 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:21.225 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:21.225 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:21.225 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.225 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.225 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.225 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:21.225 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.225 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.225 [2024-12-06 15:43:04.462711] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:21.225 [2024-12-06 15:43:04.462783] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.225 [2024-12-06 15:43:04.462812] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:21.225 [2024-12-06 15:43:04.462828] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.225 [2024-12-06 15:43:04.465742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.225 [2024-12-06 15:43:04.465788] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:21.225 [2024-12-06 15:43:04.465894] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:21.225 [2024-12-06 15:43:04.465958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:21.225 [2024-12-06 15:43:04.466139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:21.225 spare 00:16:21.225 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.225 15:43:04 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:21.225 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.225 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.485 [2024-12-06 15:43:04.566087] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:21.485 [2024-12-06 15:43:04.566124] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:21.485 [2024-12-06 15:43:04.566454] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:16:21.485 [2024-12-06 15:43:04.566679] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:21.485 [2024-12-06 15:43:04.566698] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:21.485 [2024-12-06 15:43:04.566895] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.485 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.485 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:21.485 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:21.485 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:21.485 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:21.485 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:21.485 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:21.485 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:16:21.485 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.485 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.485 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.485 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.485 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.485 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.485 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.485 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.485 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.485 "name": "raid_bdev1", 00:16:21.485 "uuid": "d6a9c682-333e-4cf4-9c9e-aed8894bd22f", 00:16:21.485 "strip_size_kb": 0, 00:16:21.485 "state": "online", 00:16:21.485 "raid_level": "raid1", 00:16:21.485 "superblock": true, 00:16:21.485 "num_base_bdevs": 2, 00:16:21.485 "num_base_bdevs_discovered": 2, 00:16:21.485 "num_base_bdevs_operational": 2, 00:16:21.485 "base_bdevs_list": [ 00:16:21.485 { 00:16:21.485 "name": "spare", 00:16:21.485 "uuid": "d18e48a2-37b8-5a0f-bf9a-e68b55ea2fbf", 00:16:21.485 "is_configured": true, 00:16:21.485 "data_offset": 2048, 00:16:21.485 "data_size": 63488 00:16:21.485 }, 00:16:21.485 { 00:16:21.485 "name": "BaseBdev2", 00:16:21.485 "uuid": "d00a6c64-39cb-5cf5-9a2c-909cda1365ca", 00:16:21.485 "is_configured": true, 00:16:21.485 "data_offset": 2048, 00:16:21.485 "data_size": 63488 00:16:21.485 } 00:16:21.485 ] 00:16:21.485 }' 00:16:21.485 15:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.485 15:43:04 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.744 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:21.744 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.744 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:21.744 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:21.744 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.744 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.744 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.744 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.744 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.744 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.002 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.002 "name": "raid_bdev1", 00:16:22.002 "uuid": "d6a9c682-333e-4cf4-9c9e-aed8894bd22f", 00:16:22.002 "strip_size_kb": 0, 00:16:22.002 "state": "online", 00:16:22.002 "raid_level": "raid1", 00:16:22.002 "superblock": true, 00:16:22.002 "num_base_bdevs": 2, 00:16:22.002 "num_base_bdevs_discovered": 2, 00:16:22.002 "num_base_bdevs_operational": 2, 00:16:22.002 "base_bdevs_list": [ 00:16:22.002 { 00:16:22.002 "name": "spare", 00:16:22.002 "uuid": "d18e48a2-37b8-5a0f-bf9a-e68b55ea2fbf", 00:16:22.002 "is_configured": true, 00:16:22.002 "data_offset": 2048, 00:16:22.002 "data_size": 63488 00:16:22.002 }, 00:16:22.002 { 00:16:22.002 "name": "BaseBdev2", 00:16:22.002 "uuid": 
"d00a6c64-39cb-5cf5-9a2c-909cda1365ca", 00:16:22.002 "is_configured": true, 00:16:22.002 "data_offset": 2048, 00:16:22.002 "data_size": 63488 00:16:22.002 } 00:16:22.002 ] 00:16:22.002 }' 00:16:22.002 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.002 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:22.002 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.002 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:22.002 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.002 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:22.002 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.002 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.002 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.002 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:22.002 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:22.002 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.002 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.002 [2024-12-06 15:43:05.198666] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:22.002 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.002 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:16:22.002 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.002 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.002 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:22.002 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:22.002 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:22.002 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.002 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.002 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.002 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.002 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.002 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.002 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.002 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.002 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.002 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.002 "name": "raid_bdev1", 00:16:22.002 "uuid": "d6a9c682-333e-4cf4-9c9e-aed8894bd22f", 00:16:22.002 "strip_size_kb": 0, 00:16:22.002 "state": "online", 00:16:22.002 "raid_level": "raid1", 00:16:22.002 "superblock": true, 00:16:22.002 "num_base_bdevs": 2, 00:16:22.002 "num_base_bdevs_discovered": 1, 00:16:22.002 
"num_base_bdevs_operational": 1, 00:16:22.002 "base_bdevs_list": [ 00:16:22.002 { 00:16:22.002 "name": null, 00:16:22.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.002 "is_configured": false, 00:16:22.002 "data_offset": 0, 00:16:22.002 "data_size": 63488 00:16:22.002 }, 00:16:22.002 { 00:16:22.002 "name": "BaseBdev2", 00:16:22.002 "uuid": "d00a6c64-39cb-5cf5-9a2c-909cda1365ca", 00:16:22.002 "is_configured": true, 00:16:22.002 "data_offset": 2048, 00:16:22.002 "data_size": 63488 00:16:22.002 } 00:16:22.002 ] 00:16:22.002 }' 00:16:22.002 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.002 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.567 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:22.567 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.567 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.567 [2024-12-06 15:43:05.634683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:22.567 [2024-12-06 15:43:05.635053] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:22.567 [2024-12-06 15:43:05.635184] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:22.567 [2024-12-06 15:43:05.635299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:22.567 [2024-12-06 15:43:05.654464] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:16:22.567 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.567 15:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:22.567 [2024-12-06 15:43:05.657041] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:23.501 15:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:23.501 15:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:23.501 15:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:23.501 15:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:23.501 15:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:23.501 15:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.501 15:43:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.501 15:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.501 15:43:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:23.501 15:43:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.501 15:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:23.501 "name": "raid_bdev1", 00:16:23.501 "uuid": "d6a9c682-333e-4cf4-9c9e-aed8894bd22f", 00:16:23.501 "strip_size_kb": 0, 00:16:23.501 "state": "online", 
00:16:23.501 "raid_level": "raid1", 00:16:23.501 "superblock": true, 00:16:23.501 "num_base_bdevs": 2, 00:16:23.501 "num_base_bdevs_discovered": 2, 00:16:23.501 "num_base_bdevs_operational": 2, 00:16:23.501 "process": { 00:16:23.501 "type": "rebuild", 00:16:23.501 "target": "spare", 00:16:23.501 "progress": { 00:16:23.501 "blocks": 20480, 00:16:23.501 "percent": 32 00:16:23.501 } 00:16:23.501 }, 00:16:23.501 "base_bdevs_list": [ 00:16:23.501 { 00:16:23.501 "name": "spare", 00:16:23.501 "uuid": "d18e48a2-37b8-5a0f-bf9a-e68b55ea2fbf", 00:16:23.501 "is_configured": true, 00:16:23.501 "data_offset": 2048, 00:16:23.501 "data_size": 63488 00:16:23.501 }, 00:16:23.501 { 00:16:23.501 "name": "BaseBdev2", 00:16:23.501 "uuid": "d00a6c64-39cb-5cf5-9a2c-909cda1365ca", 00:16:23.501 "is_configured": true, 00:16:23.501 "data_offset": 2048, 00:16:23.501 "data_size": 63488 00:16:23.501 } 00:16:23.501 ] 00:16:23.501 }' 00:16:23.501 15:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:23.501 15:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:23.501 15:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:23.501 15:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:23.501 15:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:23.501 15:43:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.501 15:43:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:23.760 [2024-12-06 15:43:06.796717] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:23.760 [2024-12-06 15:43:06.866161] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:23.760 [2024-12-06 
15:43:06.866224] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.760 [2024-12-06 15:43:06.866264] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:23.760 [2024-12-06 15:43:06.866274] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:23.760 15:43:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.760 15:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:23.760 15:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:23.760 15:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.760 15:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:23.761 15:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:23.761 15:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:23.761 15:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.761 15:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.761 15:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.761 15:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.761 15:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.761 15:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.761 15:43:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.761 15:43:06 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:16:23.761 15:43:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.761 15:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.761 "name": "raid_bdev1", 00:16:23.761 "uuid": "d6a9c682-333e-4cf4-9c9e-aed8894bd22f", 00:16:23.761 "strip_size_kb": 0, 00:16:23.761 "state": "online", 00:16:23.761 "raid_level": "raid1", 00:16:23.761 "superblock": true, 00:16:23.761 "num_base_bdevs": 2, 00:16:23.761 "num_base_bdevs_discovered": 1, 00:16:23.761 "num_base_bdevs_operational": 1, 00:16:23.761 "base_bdevs_list": [ 00:16:23.761 { 00:16:23.761 "name": null, 00:16:23.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.761 "is_configured": false, 00:16:23.761 "data_offset": 0, 00:16:23.761 "data_size": 63488 00:16:23.761 }, 00:16:23.761 { 00:16:23.761 "name": "BaseBdev2", 00:16:23.761 "uuid": "d00a6c64-39cb-5cf5-9a2c-909cda1365ca", 00:16:23.761 "is_configured": true, 00:16:23.761 "data_offset": 2048, 00:16:23.761 "data_size": 63488 00:16:23.761 } 00:16:23.761 ] 00:16:23.761 }' 00:16:23.761 15:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.761 15:43:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.329 15:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:24.329 15:43:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.329 15:43:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.329 [2024-12-06 15:43:07.330294] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:24.329 [2024-12-06 15:43:07.330376] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.329 [2024-12-06 15:43:07.330408] vbdev_passthru.c: 682:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:16:24.329 [2024-12-06 15:43:07.330421] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.329 [2024-12-06 15:43:07.331058] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.329 [2024-12-06 15:43:07.331095] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:24.329 [2024-12-06 15:43:07.331229] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:24.329 [2024-12-06 15:43:07.331246] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:24.329 [2024-12-06 15:43:07.331264] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:24.329 [2024-12-06 15:43:07.331296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:24.329 [2024-12-06 15:43:07.349555] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:16:24.329 spare 00:16:24.329 15:43:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.329 15:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:24.329 [2024-12-06 15:43:07.352125] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:25.264 15:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:25.264 15:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.264 15:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:25.264 15:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:25.264 15:43:08 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.264 15:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.264 15:43:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.264 15:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.264 15:43:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.264 15:43:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.264 15:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.264 "name": "raid_bdev1", 00:16:25.264 "uuid": "d6a9c682-333e-4cf4-9c9e-aed8894bd22f", 00:16:25.264 "strip_size_kb": 0, 00:16:25.264 "state": "online", 00:16:25.264 "raid_level": "raid1", 00:16:25.264 "superblock": true, 00:16:25.264 "num_base_bdevs": 2, 00:16:25.264 "num_base_bdevs_discovered": 2, 00:16:25.264 "num_base_bdevs_operational": 2, 00:16:25.264 "process": { 00:16:25.264 "type": "rebuild", 00:16:25.264 "target": "spare", 00:16:25.264 "progress": { 00:16:25.264 "blocks": 20480, 00:16:25.264 "percent": 32 00:16:25.264 } 00:16:25.264 }, 00:16:25.264 "base_bdevs_list": [ 00:16:25.264 { 00:16:25.264 "name": "spare", 00:16:25.264 "uuid": "d18e48a2-37b8-5a0f-bf9a-e68b55ea2fbf", 00:16:25.264 "is_configured": true, 00:16:25.264 "data_offset": 2048, 00:16:25.264 "data_size": 63488 00:16:25.264 }, 00:16:25.264 { 00:16:25.264 "name": "BaseBdev2", 00:16:25.264 "uuid": "d00a6c64-39cb-5cf5-9a2c-909cda1365ca", 00:16:25.264 "is_configured": true, 00:16:25.264 "data_offset": 2048, 00:16:25.264 "data_size": 63488 00:16:25.264 } 00:16:25.264 ] 00:16:25.264 }' 00:16:25.264 15:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.264 15:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:16:25.264 15:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.264 15:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:25.264 15:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:25.264 15:43:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.264 15:43:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.264 [2024-12-06 15:43:08.504352] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:25.523 [2024-12-06 15:43:08.560971] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:25.523 [2024-12-06 15:43:08.561051] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.523 [2024-12-06 15:43:08.561068] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:25.523 [2024-12-06 15:43:08.561079] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:25.523 15:43:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.523 15:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:25.523 15:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.523 15:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.523 15:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:25.523 15:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:25.523 15:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:16:25.523 15:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.523 15:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.523 15:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.523 15:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.523 15:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.523 15:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.523 15:43:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.523 15:43:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.523 15:43:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.523 15:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.523 "name": "raid_bdev1", 00:16:25.523 "uuid": "d6a9c682-333e-4cf4-9c9e-aed8894bd22f", 00:16:25.523 "strip_size_kb": 0, 00:16:25.523 "state": "online", 00:16:25.523 "raid_level": "raid1", 00:16:25.523 "superblock": true, 00:16:25.523 "num_base_bdevs": 2, 00:16:25.523 "num_base_bdevs_discovered": 1, 00:16:25.523 "num_base_bdevs_operational": 1, 00:16:25.523 "base_bdevs_list": [ 00:16:25.523 { 00:16:25.523 "name": null, 00:16:25.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.523 "is_configured": false, 00:16:25.523 "data_offset": 0, 00:16:25.523 "data_size": 63488 00:16:25.523 }, 00:16:25.523 { 00:16:25.523 "name": "BaseBdev2", 00:16:25.523 "uuid": "d00a6c64-39cb-5cf5-9a2c-909cda1365ca", 00:16:25.523 "is_configured": true, 00:16:25.523 "data_offset": 2048, 00:16:25.523 "data_size": 63488 00:16:25.523 } 00:16:25.523 ] 00:16:25.523 }' 
00:16:25.523 15:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.523 15:43:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.782 15:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:25.782 15:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.782 15:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:25.782 15:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:25.782 15:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.782 15:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.782 15:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.782 15:43:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.782 15:43:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.782 15:43:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.782 15:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.782 "name": "raid_bdev1", 00:16:25.782 "uuid": "d6a9c682-333e-4cf4-9c9e-aed8894bd22f", 00:16:25.782 "strip_size_kb": 0, 00:16:25.782 "state": "online", 00:16:25.782 "raid_level": "raid1", 00:16:25.782 "superblock": true, 00:16:25.782 "num_base_bdevs": 2, 00:16:25.782 "num_base_bdevs_discovered": 1, 00:16:25.782 "num_base_bdevs_operational": 1, 00:16:25.782 "base_bdevs_list": [ 00:16:25.782 { 00:16:25.782 "name": null, 00:16:25.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.782 "is_configured": false, 00:16:25.782 "data_offset": 0, 
00:16:25.782 "data_size": 63488 00:16:25.782 }, 00:16:25.782 { 00:16:25.782 "name": "BaseBdev2", 00:16:25.782 "uuid": "d00a6c64-39cb-5cf5-9a2c-909cda1365ca", 00:16:25.782 "is_configured": true, 00:16:25.782 "data_offset": 2048, 00:16:25.782 "data_size": 63488 00:16:25.782 } 00:16:25.782 ] 00:16:25.782 }' 00:16:25.782 15:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:26.041 15:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:26.041 15:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:26.041 15:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:26.041 15:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:26.041 15:43:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.041 15:43:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:26.041 15:43:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.041 15:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:26.041 15:43:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.041 15:43:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:26.041 [2024-12-06 15:43:09.142216] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:26.041 [2024-12-06 15:43:09.142280] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.041 [2024-12-06 15:43:09.142312] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:26.041 [2024-12-06 15:43:09.142330] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.041 [2024-12-06 15:43:09.142876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.041 [2024-12-06 15:43:09.142903] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:26.041 [2024-12-06 15:43:09.142988] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:26.041 [2024-12-06 15:43:09.143010] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:26.041 [2024-12-06 15:43:09.143021] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:26.041 [2024-12-06 15:43:09.143039] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:26.041 BaseBdev1 00:16:26.041 15:43:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.041 15:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:26.997 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:26.997 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:26.997 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.997 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:26.997 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:26.997 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:26.997 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.997 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.997 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.998 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.998 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.998 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.998 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:26.998 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.998 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.998 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.998 "name": "raid_bdev1", 00:16:26.998 "uuid": "d6a9c682-333e-4cf4-9c9e-aed8894bd22f", 00:16:26.998 "strip_size_kb": 0, 00:16:26.998 "state": "online", 00:16:26.998 "raid_level": "raid1", 00:16:26.998 "superblock": true, 00:16:26.998 "num_base_bdevs": 2, 00:16:26.998 "num_base_bdevs_discovered": 1, 00:16:26.998 "num_base_bdevs_operational": 1, 00:16:26.998 "base_bdevs_list": [ 00:16:26.998 { 00:16:26.998 "name": null, 00:16:26.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.998 "is_configured": false, 00:16:26.998 "data_offset": 0, 00:16:26.998 "data_size": 63488 00:16:26.998 }, 00:16:26.998 { 00:16:26.998 "name": "BaseBdev2", 00:16:26.998 "uuid": "d00a6c64-39cb-5cf5-9a2c-909cda1365ca", 00:16:26.998 "is_configured": true, 00:16:26.998 "data_offset": 2048, 00:16:26.998 "data_size": 63488 00:16:26.998 } 00:16:26.998 ] 00:16:26.998 }' 00:16:26.998 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.998 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:16:27.565 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:27.565 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.565 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:27.565 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:27.565 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.565 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.565 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.565 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.565 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.565 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.565 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.565 "name": "raid_bdev1", 00:16:27.565 "uuid": "d6a9c682-333e-4cf4-9c9e-aed8894bd22f", 00:16:27.565 "strip_size_kb": 0, 00:16:27.565 "state": "online", 00:16:27.565 "raid_level": "raid1", 00:16:27.565 "superblock": true, 00:16:27.565 "num_base_bdevs": 2, 00:16:27.565 "num_base_bdevs_discovered": 1, 00:16:27.565 "num_base_bdevs_operational": 1, 00:16:27.565 "base_bdevs_list": [ 00:16:27.565 { 00:16:27.565 "name": null, 00:16:27.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.566 "is_configured": false, 00:16:27.566 "data_offset": 0, 00:16:27.566 "data_size": 63488 00:16:27.566 }, 00:16:27.566 { 00:16:27.566 "name": "BaseBdev2", 00:16:27.566 "uuid": "d00a6c64-39cb-5cf5-9a2c-909cda1365ca", 00:16:27.566 "is_configured": true, 
00:16:27.566 "data_offset": 2048, 00:16:27.566 "data_size": 63488 00:16:27.566 } 00:16:27.566 ] 00:16:27.566 }' 00:16:27.566 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.566 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:27.566 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.566 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:27.566 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:27.566 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:16:27.566 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:27.566 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:27.566 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:27.566 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:27.566 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:27.566 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:27.566 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.566 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.566 [2024-12-06 15:43:10.714316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:27.566 [2024-12-06 15:43:10.714550] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:27.566 [2024-12-06 15:43:10.714569] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:27.566 request: 00:16:27.566 { 00:16:27.566 "base_bdev": "BaseBdev1", 00:16:27.566 "raid_bdev": "raid_bdev1", 00:16:27.566 "method": "bdev_raid_add_base_bdev", 00:16:27.566 "req_id": 1 00:16:27.566 } 00:16:27.566 Got JSON-RPC error response 00:16:27.566 response: 00:16:27.566 { 00:16:27.566 "code": -22, 00:16:27.566 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:27.566 } 00:16:27.566 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:27.566 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:16:27.566 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:27.566 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:27.566 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:27.566 15:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:28.502 15:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:28.502 15:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:28.503 15:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:28.503 15:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:28.503 15:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:28.503 15:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:16:28.503 15:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.503 15:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.503 15:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.503 15:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.503 15:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.503 15:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.503 15:43:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.503 15:43:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:28.503 15:43:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.503 15:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.503 "name": "raid_bdev1", 00:16:28.503 "uuid": "d6a9c682-333e-4cf4-9c9e-aed8894bd22f", 00:16:28.503 "strip_size_kb": 0, 00:16:28.503 "state": "online", 00:16:28.503 "raid_level": "raid1", 00:16:28.503 "superblock": true, 00:16:28.503 "num_base_bdevs": 2, 00:16:28.503 "num_base_bdevs_discovered": 1, 00:16:28.503 "num_base_bdevs_operational": 1, 00:16:28.503 "base_bdevs_list": [ 00:16:28.503 { 00:16:28.503 "name": null, 00:16:28.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.503 "is_configured": false, 00:16:28.503 "data_offset": 0, 00:16:28.503 "data_size": 63488 00:16:28.503 }, 00:16:28.503 { 00:16:28.503 "name": "BaseBdev2", 00:16:28.503 "uuid": "d00a6c64-39cb-5cf5-9a2c-909cda1365ca", 00:16:28.503 "is_configured": true, 00:16:28.503 "data_offset": 2048, 00:16:28.503 "data_size": 63488 00:16:28.503 } 00:16:28.503 ] 00:16:28.503 }' 
00:16:28.503 15:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.503 15:43:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:29.072 15:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:29.072 15:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.072 15:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:29.072 15:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:29.072 15:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.072 15:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.072 15:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.072 15:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.072 15:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:29.072 15:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.072 15:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.072 "name": "raid_bdev1", 00:16:29.072 "uuid": "d6a9c682-333e-4cf4-9c9e-aed8894bd22f", 00:16:29.072 "strip_size_kb": 0, 00:16:29.072 "state": "online", 00:16:29.072 "raid_level": "raid1", 00:16:29.072 "superblock": true, 00:16:29.072 "num_base_bdevs": 2, 00:16:29.073 "num_base_bdevs_discovered": 1, 00:16:29.073 "num_base_bdevs_operational": 1, 00:16:29.073 "base_bdevs_list": [ 00:16:29.073 { 00:16:29.073 "name": null, 00:16:29.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.073 "is_configured": false, 00:16:29.073 "data_offset": 0, 
00:16:29.073 "data_size": 63488 00:16:29.073 }, 00:16:29.073 { 00:16:29.073 "name": "BaseBdev2", 00:16:29.073 "uuid": "d00a6c64-39cb-5cf5-9a2c-909cda1365ca", 00:16:29.073 "is_configured": true, 00:16:29.073 "data_offset": 2048, 00:16:29.073 "data_size": 63488 00:16:29.073 } 00:16:29.073 ] 00:16:29.073 }' 00:16:29.073 15:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.073 15:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:29.073 15:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.073 15:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:29.073 15:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76893 00:16:29.073 15:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 76893 ']' 00:16:29.073 15:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 76893 00:16:29.073 15:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:16:29.073 15:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:29.073 15:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76893 00:16:29.073 killing process with pid 76893 00:16:29.073 Received shutdown signal, test time was about 16.786992 seconds 00:16:29.073 00:16:29.073 Latency(us) 00:16:29.073 [2024-12-06T15:43:12.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:29.073 [2024-12-06T15:43:12.368Z] =================================================================================================================== 00:16:29.073 [2024-12-06T15:43:12.368Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:29.073 15:43:12 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:29.073 15:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:29.073 15:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76893' 00:16:29.073 15:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 76893 00:16:29.073 [2024-12-06 15:43:12.292926] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:29.073 15:43:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 76893 00:16:29.073 [2024-12-06 15:43:12.293082] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:29.073 [2024-12-06 15:43:12.293149] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:29.073 [2024-12-06 15:43:12.293160] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:29.332 [2024-12-06 15:43:12.546688] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:30.710 15:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:16:30.710 00:16:30.710 real 0m20.097s 00:16:30.710 user 0m25.707s 00:16:30.710 sys 0m2.621s 00:16:30.710 15:43:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:30.710 ************************************ 00:16:30.710 END TEST raid_rebuild_test_sb_io 00:16:30.710 ************************************ 00:16:30.710 15:43:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.710 15:43:13 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:16:30.710 15:43:13 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:16:30.710 15:43:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 
00:16:30.710 15:43:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:30.710 15:43:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:30.710 ************************************ 00:16:30.710 START TEST raid_rebuild_test 00:16:30.710 ************************************ 00:16:30.710 15:43:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:16:30.710 15:43:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:30.710 15:43:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:30.710 15:43:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:30.710 15:43:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:30.710 15:43:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:30.710 15:43:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:30.710 15:43:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:30.710 15:43:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:30.710 15:43:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:30.710 15:43:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:30.710 15:43:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:30.710 15:43:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:30.710 15:43:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:30.710 15:43:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:30.710 15:43:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:30.710 15:43:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i 
<= num_base_bdevs )) 00:16:30.710 15:43:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:30.710 15:43:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:30.710 15:43:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:30.710 15:43:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:30.710 15:43:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:30.710 15:43:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:30.710 15:43:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:30.710 15:43:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:30.710 15:43:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:30.710 15:43:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:30.710 15:43:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:30.710 15:43:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:30.710 15:43:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:30.710 15:43:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77576 00:16:30.710 15:43:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:30.710 15:43:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77576 00:16:30.710 15:43:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77576 ']' 00:16:30.710 15:43:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.710 15:43:13 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:30.710 15:43:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.711 15:43:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:30.711 15:43:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.969 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:30.969 Zero copy mechanism will not be used. 00:16:30.969 [2024-12-06 15:43:14.030347] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:16:30.969 [2024-12-06 15:43:14.030492] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77576 ] 00:16:30.969 [2024-12-06 15:43:14.216012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.228 [2024-12-06 15:43:14.350836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.487 [2024-12-06 15:43:14.585790] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:31.487 [2024-12-06 15:43:14.585845] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:31.747 15:43:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:31.747 15:43:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:16:31.747 15:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:31.747 15:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1_malloc 00:16:31.747 15:43:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.747 15:43:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.747 BaseBdev1_malloc 00:16:31.747 15:43:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.747 15:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:31.747 15:43:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.747 15:43:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.747 [2024-12-06 15:43:14.908287] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:31.747 [2024-12-06 15:43:14.908365] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.747 [2024-12-06 15:43:14.908391] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:31.747 [2024-12-06 15:43:14.908408] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.747 [2024-12-06 15:43:14.911143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.747 [2024-12-06 15:43:14.911190] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:31.747 BaseBdev1 00:16:31.747 15:43:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.747 15:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:31.747 15:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:31.747 15:43:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.747 15:43:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:16:31.747 BaseBdev2_malloc 00:16:31.747 15:43:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.747 15:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:31.747 15:43:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.747 15:43:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.747 [2024-12-06 15:43:14.971717] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:31.747 [2024-12-06 15:43:14.971913] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.747 [2024-12-06 15:43:14.971952] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:31.747 [2024-12-06 15:43:14.971970] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.747 [2024-12-06 15:43:14.974822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.747 [2024-12-06 15:43:14.974866] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:31.747 BaseBdev2 00:16:31.747 15:43:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.747 15:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:31.747 15:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:31.747 15:43:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.747 15:43:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.008 BaseBdev3_malloc 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.008 [2024-12-06 15:43:15.047151] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:32.008 [2024-12-06 15:43:15.047211] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.008 [2024-12-06 15:43:15.047236] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:32.008 [2024-12-06 15:43:15.047252] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.008 [2024-12-06 15:43:15.049924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.008 [2024-12-06 15:43:15.049968] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:32.008 BaseBdev3 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.008 BaseBdev4_malloc 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:32.008 [2024-12-06 15:43:15.110626] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:32.008 [2024-12-06 15:43:15.110689] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.008 [2024-12-06 15:43:15.110712] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:32.008 [2024-12-06 15:43:15.110729] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.008 [2024-12-06 15:43:15.113335] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.008 [2024-12-06 15:43:15.113493] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:32.008 BaseBdev4 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.008 spare_malloc 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.008 spare_delay 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:32.008 
15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.008 [2024-12-06 15:43:15.185392] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:32.008 [2024-12-06 15:43:15.185447] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.008 [2024-12-06 15:43:15.185467] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:32.008 [2024-12-06 15:43:15.185482] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.008 [2024-12-06 15:43:15.188136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.008 [2024-12-06 15:43:15.188178] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:32.008 spare 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.008 [2024-12-06 15:43:15.197431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:32.008 [2024-12-06 15:43:15.199917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:32.008 [2024-12-06 15:43:15.199979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:32.008 [2024-12-06 15:43:15.200031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:32.008 [2024-12-06 15:43:15.200113] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007780 00:16:32.008 [2024-12-06 15:43:15.200128] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:32.008 [2024-12-06 15:43:15.200399] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:32.008 [2024-12-06 15:43:15.200588] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:32.008 [2024-12-06 15:43:15.200604] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:32.008 [2024-12-06 15:43:15.200748] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.008 15:43:15 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.008 "name": "raid_bdev1", 00:16:32.008 "uuid": "8a47c159-d264-4c75-9a01-4df12623ab65", 00:16:32.008 "strip_size_kb": 0, 00:16:32.008 "state": "online", 00:16:32.008 "raid_level": "raid1", 00:16:32.008 "superblock": false, 00:16:32.008 "num_base_bdevs": 4, 00:16:32.008 "num_base_bdevs_discovered": 4, 00:16:32.008 "num_base_bdevs_operational": 4, 00:16:32.008 "base_bdevs_list": [ 00:16:32.008 { 00:16:32.008 "name": "BaseBdev1", 00:16:32.008 "uuid": "164fb3bb-6e9d-5f3d-b246-bd66c1721366", 00:16:32.008 "is_configured": true, 00:16:32.008 "data_offset": 0, 00:16:32.008 "data_size": 65536 00:16:32.008 }, 00:16:32.008 { 00:16:32.008 "name": "BaseBdev2", 00:16:32.008 "uuid": "501fe657-e929-5954-9c5f-7eba34fbdf49", 00:16:32.008 "is_configured": true, 00:16:32.008 "data_offset": 0, 00:16:32.008 "data_size": 65536 00:16:32.008 }, 00:16:32.008 { 00:16:32.008 "name": "BaseBdev3", 00:16:32.008 "uuid": "18a2d79d-e2a1-51c1-b7b3-903b5ca6bee2", 00:16:32.008 "is_configured": true, 00:16:32.008 "data_offset": 0, 00:16:32.008 "data_size": 65536 00:16:32.008 }, 00:16:32.008 { 00:16:32.008 "name": "BaseBdev4", 00:16:32.008 "uuid": "348048f7-0e5c-5090-919c-a03a6ba3d26b", 00:16:32.008 "is_configured": true, 00:16:32.008 "data_offset": 0, 00:16:32.008 "data_size": 65536 00:16:32.008 } 00:16:32.008 ] 00:16:32.008 }' 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.008 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:16:32.575 15:43:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:32.575 15:43:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:32.575 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.575 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.575 [2024-12-06 15:43:15.657137] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:32.575 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.575 15:43:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:16:32.575 15:43:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.575 15:43:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:32.575 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.575 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.575 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.575 15:43:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:32.575 15:43:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:32.575 15:43:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:32.575 15:43:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:32.575 15:43:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:32.575 15:43:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:32.575 15:43:15 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:32.575 15:43:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:32.575 15:43:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:32.575 15:43:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:32.575 15:43:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:32.575 15:43:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:32.575 15:43:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:32.575 15:43:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:32.833 [2024-12-06 15:43:15.940552] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:32.833 /dev/nbd0 00:16:32.833 15:43:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:32.833 15:43:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:32.833 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:32.833 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:32.833 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:32.833 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:32.833 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:32.833 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:32.833 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:32.833 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:32.833 15:43:15 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:32.833 1+0 records in 00:16:32.833 1+0 records out 00:16:32.833 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408794 s, 10.0 MB/s 00:16:32.833 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:32.833 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:32.833 15:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:32.833 15:43:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:32.833 15:43:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:32.833 15:43:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:32.833 15:43:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:32.833 15:43:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:32.833 15:43:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:32.833 15:43:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:16:39.435 65536+0 records in 00:16:39.435 65536+0 records out 00:16:39.435 33554432 bytes (34 MB, 32 MiB) copied, 5.85116 s, 5.7 MB/s 00:16:39.435 15:43:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:39.436 15:43:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:39.436 15:43:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:39.436 15:43:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:39.436 
15:43:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:39.436 15:43:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:39.436 15:43:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:39.436 [2024-12-06 15:43:22.064177] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:39.436 15:43:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:39.436 15:43:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:39.436 15:43:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:39.436 15:43:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:39.436 15:43:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:39.436 15:43:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:39.436 15:43:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:39.436 15:43:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:39.436 15:43:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:39.436 15:43:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.436 15:43:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.436 [2024-12-06 15:43:22.097429] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:39.436 15:43:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.436 15:43:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:39.436 15:43:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:39.436 15:43:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.436 15:43:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:39.436 15:43:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:39.436 15:43:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:39.436 15:43:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.436 15:43:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.436 15:43:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.436 15:43:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.436 15:43:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.436 15:43:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.436 15:43:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.436 15:43:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.436 15:43:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.436 15:43:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.436 "name": "raid_bdev1", 00:16:39.436 "uuid": "8a47c159-d264-4c75-9a01-4df12623ab65", 00:16:39.436 "strip_size_kb": 0, 00:16:39.436 "state": "online", 00:16:39.436 "raid_level": "raid1", 00:16:39.436 "superblock": false, 00:16:39.436 "num_base_bdevs": 4, 00:16:39.436 "num_base_bdevs_discovered": 3, 00:16:39.436 "num_base_bdevs_operational": 3, 00:16:39.436 "base_bdevs_list": [ 00:16:39.436 { 00:16:39.436 "name": null, 00:16:39.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.436 
"is_configured": false, 00:16:39.436 "data_offset": 0, 00:16:39.436 "data_size": 65536 00:16:39.436 }, 00:16:39.436 { 00:16:39.436 "name": "BaseBdev2", 00:16:39.436 "uuid": "501fe657-e929-5954-9c5f-7eba34fbdf49", 00:16:39.436 "is_configured": true, 00:16:39.436 "data_offset": 0, 00:16:39.436 "data_size": 65536 00:16:39.436 }, 00:16:39.436 { 00:16:39.436 "name": "BaseBdev3", 00:16:39.436 "uuid": "18a2d79d-e2a1-51c1-b7b3-903b5ca6bee2", 00:16:39.436 "is_configured": true, 00:16:39.436 "data_offset": 0, 00:16:39.436 "data_size": 65536 00:16:39.436 }, 00:16:39.436 { 00:16:39.436 "name": "BaseBdev4", 00:16:39.436 "uuid": "348048f7-0e5c-5090-919c-a03a6ba3d26b", 00:16:39.436 "is_configured": true, 00:16:39.436 "data_offset": 0, 00:16:39.436 "data_size": 65536 00:16:39.436 } 00:16:39.436 ] 00:16:39.436 }' 00:16:39.436 15:43:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.436 15:43:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.436 15:43:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:39.436 15:43:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.436 15:43:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.436 [2024-12-06 15:43:22.500860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:39.436 [2024-12-06 15:43:22.517406] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:16:39.436 15:43:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.436 15:43:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:39.436 [2024-12-06 15:43:22.519893] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:40.370 15:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:40.370 15:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.370 15:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:40.370 15:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:40.370 15:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.370 15:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.370 15:43:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.370 15:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.370 15:43:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.370 15:43:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.370 15:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.370 "name": "raid_bdev1", 00:16:40.370 "uuid": "8a47c159-d264-4c75-9a01-4df12623ab65", 00:16:40.370 "strip_size_kb": 0, 00:16:40.370 "state": "online", 00:16:40.370 "raid_level": "raid1", 00:16:40.370 "superblock": false, 00:16:40.370 "num_base_bdevs": 4, 00:16:40.370 "num_base_bdevs_discovered": 4, 00:16:40.370 "num_base_bdevs_operational": 4, 00:16:40.370 "process": { 00:16:40.370 "type": "rebuild", 00:16:40.370 "target": "spare", 00:16:40.370 "progress": { 00:16:40.370 "blocks": 20480, 00:16:40.370 "percent": 31 00:16:40.370 } 00:16:40.370 }, 00:16:40.370 "base_bdevs_list": [ 00:16:40.370 { 00:16:40.370 "name": "spare", 00:16:40.370 "uuid": "acc9eae6-eb5c-5841-807d-e59b7992f2b1", 00:16:40.370 "is_configured": true, 00:16:40.370 "data_offset": 0, 00:16:40.370 "data_size": 65536 00:16:40.370 }, 00:16:40.370 { 00:16:40.370 "name": "BaseBdev2", 00:16:40.370 "uuid": 
"501fe657-e929-5954-9c5f-7eba34fbdf49", 00:16:40.370 "is_configured": true, 00:16:40.370 "data_offset": 0, 00:16:40.370 "data_size": 65536 00:16:40.370 }, 00:16:40.370 { 00:16:40.370 "name": "BaseBdev3", 00:16:40.370 "uuid": "18a2d79d-e2a1-51c1-b7b3-903b5ca6bee2", 00:16:40.370 "is_configured": true, 00:16:40.370 "data_offset": 0, 00:16:40.370 "data_size": 65536 00:16:40.370 }, 00:16:40.370 { 00:16:40.370 "name": "BaseBdev4", 00:16:40.370 "uuid": "348048f7-0e5c-5090-919c-a03a6ba3d26b", 00:16:40.370 "is_configured": true, 00:16:40.370 "data_offset": 0, 00:16:40.370 "data_size": 65536 00:16:40.370 } 00:16:40.370 ] 00:16:40.370 }' 00:16:40.370 15:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.370 15:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:40.370 15:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.630 15:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:40.630 15:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:40.630 15:43:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.630 15:43:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.630 [2024-12-06 15:43:23.671102] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:40.630 [2024-12-06 15:43:23.728693] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:40.630 [2024-12-06 15:43:23.728937] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.630 [2024-12-06 15:43:23.729034] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:40.630 [2024-12-06 15:43:23.729080] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove 
target bdev: No such device 00:16:40.630 15:43:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.630 15:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:40.630 15:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.630 15:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.630 15:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:40.630 15:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:40.630 15:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:40.630 15:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.630 15:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.630 15:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.630 15:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.630 15:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.630 15:43:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.630 15:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.630 15:43:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.630 15:43:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.630 15:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.630 "name": "raid_bdev1", 00:16:40.630 "uuid": "8a47c159-d264-4c75-9a01-4df12623ab65", 00:16:40.630 "strip_size_kb": 0, 00:16:40.630 "state": "online", 
00:16:40.630 "raid_level": "raid1", 00:16:40.630 "superblock": false, 00:16:40.630 "num_base_bdevs": 4, 00:16:40.630 "num_base_bdevs_discovered": 3, 00:16:40.630 "num_base_bdevs_operational": 3, 00:16:40.630 "base_bdevs_list": [ 00:16:40.630 { 00:16:40.630 "name": null, 00:16:40.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.630 "is_configured": false, 00:16:40.630 "data_offset": 0, 00:16:40.630 "data_size": 65536 00:16:40.630 }, 00:16:40.630 { 00:16:40.630 "name": "BaseBdev2", 00:16:40.630 "uuid": "501fe657-e929-5954-9c5f-7eba34fbdf49", 00:16:40.630 "is_configured": true, 00:16:40.630 "data_offset": 0, 00:16:40.630 "data_size": 65536 00:16:40.630 }, 00:16:40.630 { 00:16:40.630 "name": "BaseBdev3", 00:16:40.630 "uuid": "18a2d79d-e2a1-51c1-b7b3-903b5ca6bee2", 00:16:40.630 "is_configured": true, 00:16:40.630 "data_offset": 0, 00:16:40.630 "data_size": 65536 00:16:40.630 }, 00:16:40.630 { 00:16:40.630 "name": "BaseBdev4", 00:16:40.630 "uuid": "348048f7-0e5c-5090-919c-a03a6ba3d26b", 00:16:40.630 "is_configured": true, 00:16:40.630 "data_offset": 0, 00:16:40.630 "data_size": 65536 00:16:40.630 } 00:16:40.630 ] 00:16:40.630 }' 00:16:40.630 15:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.630 15:43:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.889 15:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:40.889 15:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.889 15:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:40.889 15:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:40.889 15:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:41.148 15:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:41.148 15:43:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.148 15:43:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.148 15:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.148 15:43:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.148 15:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:41.148 "name": "raid_bdev1", 00:16:41.148 "uuid": "8a47c159-d264-4c75-9a01-4df12623ab65", 00:16:41.148 "strip_size_kb": 0, 00:16:41.148 "state": "online", 00:16:41.148 "raid_level": "raid1", 00:16:41.148 "superblock": false, 00:16:41.148 "num_base_bdevs": 4, 00:16:41.148 "num_base_bdevs_discovered": 3, 00:16:41.148 "num_base_bdevs_operational": 3, 00:16:41.148 "base_bdevs_list": [ 00:16:41.148 { 00:16:41.148 "name": null, 00:16:41.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.148 "is_configured": false, 00:16:41.148 "data_offset": 0, 00:16:41.148 "data_size": 65536 00:16:41.148 }, 00:16:41.148 { 00:16:41.148 "name": "BaseBdev2", 00:16:41.148 "uuid": "501fe657-e929-5954-9c5f-7eba34fbdf49", 00:16:41.148 "is_configured": true, 00:16:41.148 "data_offset": 0, 00:16:41.148 "data_size": 65536 00:16:41.148 }, 00:16:41.148 { 00:16:41.148 "name": "BaseBdev3", 00:16:41.148 "uuid": "18a2d79d-e2a1-51c1-b7b3-903b5ca6bee2", 00:16:41.148 "is_configured": true, 00:16:41.148 "data_offset": 0, 00:16:41.148 "data_size": 65536 00:16:41.148 }, 00:16:41.148 { 00:16:41.148 "name": "BaseBdev4", 00:16:41.148 "uuid": "348048f7-0e5c-5090-919c-a03a6ba3d26b", 00:16:41.148 "is_configured": true, 00:16:41.148 "data_offset": 0, 00:16:41.148 "data_size": 65536 00:16:41.148 } 00:16:41.148 ] 00:16:41.148 }' 00:16:41.148 15:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:41.148 15:43:24 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:41.148 15:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:41.148 15:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:41.148 15:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:41.148 15:43:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.148 15:43:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.148 [2024-12-06 15:43:24.308260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:41.148 [2024-12-06 15:43:24.323542] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:16:41.148 15:43:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.148 15:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:41.148 [2024-12-06 15:43:24.326144] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:42.086 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:42.086 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.086 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:42.086 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:42.086 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.086 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.086 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.086 15:43:25 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.086 15:43:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.086 15:43:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.345 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.345 "name": "raid_bdev1", 00:16:42.345 "uuid": "8a47c159-d264-4c75-9a01-4df12623ab65", 00:16:42.345 "strip_size_kb": 0, 00:16:42.345 "state": "online", 00:16:42.345 "raid_level": "raid1", 00:16:42.345 "superblock": false, 00:16:42.345 "num_base_bdevs": 4, 00:16:42.345 "num_base_bdevs_discovered": 4, 00:16:42.345 "num_base_bdevs_operational": 4, 00:16:42.345 "process": { 00:16:42.345 "type": "rebuild", 00:16:42.345 "target": "spare", 00:16:42.345 "progress": { 00:16:42.345 "blocks": 20480, 00:16:42.345 "percent": 31 00:16:42.345 } 00:16:42.345 }, 00:16:42.345 "base_bdevs_list": [ 00:16:42.345 { 00:16:42.345 "name": "spare", 00:16:42.345 "uuid": "acc9eae6-eb5c-5841-807d-e59b7992f2b1", 00:16:42.345 "is_configured": true, 00:16:42.345 "data_offset": 0, 00:16:42.345 "data_size": 65536 00:16:42.345 }, 00:16:42.345 { 00:16:42.345 "name": "BaseBdev2", 00:16:42.345 "uuid": "501fe657-e929-5954-9c5f-7eba34fbdf49", 00:16:42.345 "is_configured": true, 00:16:42.345 "data_offset": 0, 00:16:42.345 "data_size": 65536 00:16:42.345 }, 00:16:42.345 { 00:16:42.345 "name": "BaseBdev3", 00:16:42.345 "uuid": "18a2d79d-e2a1-51c1-b7b3-903b5ca6bee2", 00:16:42.345 "is_configured": true, 00:16:42.345 "data_offset": 0, 00:16:42.345 "data_size": 65536 00:16:42.345 }, 00:16:42.345 { 00:16:42.345 "name": "BaseBdev4", 00:16:42.345 "uuid": "348048f7-0e5c-5090-919c-a03a6ba3d26b", 00:16:42.345 "is_configured": true, 00:16:42.345 "data_offset": 0, 00:16:42.345 "data_size": 65536 00:16:42.345 } 00:16:42.345 ] 00:16:42.345 }' 00:16:42.345 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:16:42.345 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:42.345 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.345 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:42.345 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:42.345 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:42.345 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:42.345 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:42.345 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:42.345 15:43:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.345 15:43:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.345 [2024-12-06 15:43:25.474235] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:42.345 [2024-12-06 15:43:25.535644] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:16:42.345 15:43:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.345 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:42.345 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:42.345 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:42.345 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.345 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:42.345 
15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:42.345 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.345 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.345 15:43:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.345 15:43:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.345 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.345 15:43:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.345 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.345 "name": "raid_bdev1", 00:16:42.345 "uuid": "8a47c159-d264-4c75-9a01-4df12623ab65", 00:16:42.345 "strip_size_kb": 0, 00:16:42.345 "state": "online", 00:16:42.345 "raid_level": "raid1", 00:16:42.345 "superblock": false, 00:16:42.345 "num_base_bdevs": 4, 00:16:42.345 "num_base_bdevs_discovered": 3, 00:16:42.345 "num_base_bdevs_operational": 3, 00:16:42.345 "process": { 00:16:42.345 "type": "rebuild", 00:16:42.345 "target": "spare", 00:16:42.345 "progress": { 00:16:42.345 "blocks": 24576, 00:16:42.345 "percent": 37 00:16:42.345 } 00:16:42.345 }, 00:16:42.345 "base_bdevs_list": [ 00:16:42.345 { 00:16:42.345 "name": "spare", 00:16:42.345 "uuid": "acc9eae6-eb5c-5841-807d-e59b7992f2b1", 00:16:42.345 "is_configured": true, 00:16:42.345 "data_offset": 0, 00:16:42.345 "data_size": 65536 00:16:42.345 }, 00:16:42.345 { 00:16:42.345 "name": null, 00:16:42.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.345 "is_configured": false, 00:16:42.345 "data_offset": 0, 00:16:42.345 "data_size": 65536 00:16:42.345 }, 00:16:42.345 { 00:16:42.345 "name": "BaseBdev3", 00:16:42.345 "uuid": "18a2d79d-e2a1-51c1-b7b3-903b5ca6bee2", 00:16:42.345 "is_configured": true, 
00:16:42.345 "data_offset": 0, 00:16:42.345 "data_size": 65536 00:16:42.345 }, 00:16:42.345 { 00:16:42.346 "name": "BaseBdev4", 00:16:42.346 "uuid": "348048f7-0e5c-5090-919c-a03a6ba3d26b", 00:16:42.346 "is_configured": true, 00:16:42.346 "data_offset": 0, 00:16:42.346 "data_size": 65536 00:16:42.346 } 00:16:42.346 ] 00:16:42.346 }' 00:16:42.346 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.346 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:42.346 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.604 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:42.604 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=453 00:16:42.604 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:42.604 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:42.604 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.604 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:42.604 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:42.604 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.604 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.604 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.604 15:43:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.604 15:43:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.604 15:43:25 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.604 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.604 "name": "raid_bdev1", 00:16:42.604 "uuid": "8a47c159-d264-4c75-9a01-4df12623ab65", 00:16:42.604 "strip_size_kb": 0, 00:16:42.604 "state": "online", 00:16:42.604 "raid_level": "raid1", 00:16:42.604 "superblock": false, 00:16:42.604 "num_base_bdevs": 4, 00:16:42.604 "num_base_bdevs_discovered": 3, 00:16:42.604 "num_base_bdevs_operational": 3, 00:16:42.604 "process": { 00:16:42.604 "type": "rebuild", 00:16:42.604 "target": "spare", 00:16:42.604 "progress": { 00:16:42.604 "blocks": 26624, 00:16:42.604 "percent": 40 00:16:42.604 } 00:16:42.604 }, 00:16:42.604 "base_bdevs_list": [ 00:16:42.604 { 00:16:42.604 "name": "spare", 00:16:42.604 "uuid": "acc9eae6-eb5c-5841-807d-e59b7992f2b1", 00:16:42.604 "is_configured": true, 00:16:42.604 "data_offset": 0, 00:16:42.604 "data_size": 65536 00:16:42.604 }, 00:16:42.604 { 00:16:42.604 "name": null, 00:16:42.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.604 "is_configured": false, 00:16:42.604 "data_offset": 0, 00:16:42.604 "data_size": 65536 00:16:42.604 }, 00:16:42.604 { 00:16:42.604 "name": "BaseBdev3", 00:16:42.604 "uuid": "18a2d79d-e2a1-51c1-b7b3-903b5ca6bee2", 00:16:42.604 "is_configured": true, 00:16:42.604 "data_offset": 0, 00:16:42.604 "data_size": 65536 00:16:42.604 }, 00:16:42.604 { 00:16:42.604 "name": "BaseBdev4", 00:16:42.604 "uuid": "348048f7-0e5c-5090-919c-a03a6ba3d26b", 00:16:42.604 "is_configured": true, 00:16:42.604 "data_offset": 0, 00:16:42.604 "data_size": 65536 00:16:42.604 } 00:16:42.604 ] 00:16:42.604 }' 00:16:42.604 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.604 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:42.604 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:16:42.604 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:42.604 15:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:43.539 15:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:43.539 15:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:43.539 15:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.539 15:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:43.539 15:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:43.539 15:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.539 15:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.539 15:43:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.539 15:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.539 15:43:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.539 15:43:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.798 15:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.798 "name": "raid_bdev1", 00:16:43.798 "uuid": "8a47c159-d264-4c75-9a01-4df12623ab65", 00:16:43.798 "strip_size_kb": 0, 00:16:43.798 "state": "online", 00:16:43.798 "raid_level": "raid1", 00:16:43.798 "superblock": false, 00:16:43.799 "num_base_bdevs": 4, 00:16:43.799 "num_base_bdevs_discovered": 3, 00:16:43.799 "num_base_bdevs_operational": 3, 00:16:43.799 "process": { 00:16:43.799 "type": "rebuild", 00:16:43.799 "target": "spare", 00:16:43.799 "progress": { 00:16:43.799 
"blocks": 49152, 00:16:43.799 "percent": 75 00:16:43.799 } 00:16:43.799 }, 00:16:43.799 "base_bdevs_list": [ 00:16:43.799 { 00:16:43.799 "name": "spare", 00:16:43.799 "uuid": "acc9eae6-eb5c-5841-807d-e59b7992f2b1", 00:16:43.799 "is_configured": true, 00:16:43.799 "data_offset": 0, 00:16:43.799 "data_size": 65536 00:16:43.799 }, 00:16:43.799 { 00:16:43.799 "name": null, 00:16:43.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.799 "is_configured": false, 00:16:43.799 "data_offset": 0, 00:16:43.799 "data_size": 65536 00:16:43.799 }, 00:16:43.799 { 00:16:43.799 "name": "BaseBdev3", 00:16:43.799 "uuid": "18a2d79d-e2a1-51c1-b7b3-903b5ca6bee2", 00:16:43.799 "is_configured": true, 00:16:43.799 "data_offset": 0, 00:16:43.799 "data_size": 65536 00:16:43.799 }, 00:16:43.799 { 00:16:43.799 "name": "BaseBdev4", 00:16:43.799 "uuid": "348048f7-0e5c-5090-919c-a03a6ba3d26b", 00:16:43.799 "is_configured": true, 00:16:43.799 "data_offset": 0, 00:16:43.799 "data_size": 65536 00:16:43.799 } 00:16:43.799 ] 00:16:43.799 }' 00:16:43.799 15:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.799 15:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:43.799 15:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.799 15:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:43.799 15:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:44.368 [2024-12-06 15:43:27.549810] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:44.368 [2024-12-06 15:43:27.549889] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:44.368 [2024-12-06 15:43:27.549954] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.938 15:43:27 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:44.938 15:43:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:44.938 15:43:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:44.938 15:43:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:44.938 15:43:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:44.938 15:43:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:44.938 15:43:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.938 15:43:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.938 15:43:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.938 15:43:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.938 15:43:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.938 15:43:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:44.938 "name": "raid_bdev1", 00:16:44.938 "uuid": "8a47c159-d264-4c75-9a01-4df12623ab65", 00:16:44.938 "strip_size_kb": 0, 00:16:44.938 "state": "online", 00:16:44.938 "raid_level": "raid1", 00:16:44.938 "superblock": false, 00:16:44.938 "num_base_bdevs": 4, 00:16:44.938 "num_base_bdevs_discovered": 3, 00:16:44.938 "num_base_bdevs_operational": 3, 00:16:44.938 "base_bdevs_list": [ 00:16:44.938 { 00:16:44.938 "name": "spare", 00:16:44.938 "uuid": "acc9eae6-eb5c-5841-807d-e59b7992f2b1", 00:16:44.938 "is_configured": true, 00:16:44.938 "data_offset": 0, 00:16:44.938 "data_size": 65536 00:16:44.938 }, 00:16:44.938 { 00:16:44.938 "name": null, 00:16:44.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.938 "is_configured": false, 00:16:44.938 
"data_offset": 0, 00:16:44.938 "data_size": 65536 00:16:44.938 }, 00:16:44.938 { 00:16:44.938 "name": "BaseBdev3", 00:16:44.938 "uuid": "18a2d79d-e2a1-51c1-b7b3-903b5ca6bee2", 00:16:44.938 "is_configured": true, 00:16:44.938 "data_offset": 0, 00:16:44.938 "data_size": 65536 00:16:44.938 }, 00:16:44.938 { 00:16:44.938 "name": "BaseBdev4", 00:16:44.938 "uuid": "348048f7-0e5c-5090-919c-a03a6ba3d26b", 00:16:44.938 "is_configured": true, 00:16:44.938 "data_offset": 0, 00:16:44.938 "data_size": 65536 00:16:44.938 } 00:16:44.938 ] 00:16:44.938 }' 00:16:44.938 15:43:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:44.938 15:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:44.938 15:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:44.938 15:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:44.938 15:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:44.938 15:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:44.938 15:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:44.938 15:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:44.938 15:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:44.938 15:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:44.938 15:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.938 15:43:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.938 15:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.938 15:43:28 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.938 15:43:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.938 15:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:44.938 "name": "raid_bdev1", 00:16:44.938 "uuid": "8a47c159-d264-4c75-9a01-4df12623ab65", 00:16:44.938 "strip_size_kb": 0, 00:16:44.938 "state": "online", 00:16:44.938 "raid_level": "raid1", 00:16:44.938 "superblock": false, 00:16:44.938 "num_base_bdevs": 4, 00:16:44.938 "num_base_bdevs_discovered": 3, 00:16:44.938 "num_base_bdevs_operational": 3, 00:16:44.938 "base_bdevs_list": [ 00:16:44.938 { 00:16:44.938 "name": "spare", 00:16:44.938 "uuid": "acc9eae6-eb5c-5841-807d-e59b7992f2b1", 00:16:44.938 "is_configured": true, 00:16:44.938 "data_offset": 0, 00:16:44.938 "data_size": 65536 00:16:44.939 }, 00:16:44.939 { 00:16:44.939 "name": null, 00:16:44.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.939 "is_configured": false, 00:16:44.939 "data_offset": 0, 00:16:44.939 "data_size": 65536 00:16:44.939 }, 00:16:44.939 { 00:16:44.939 "name": "BaseBdev3", 00:16:44.939 "uuid": "18a2d79d-e2a1-51c1-b7b3-903b5ca6bee2", 00:16:44.939 "is_configured": true, 00:16:44.939 "data_offset": 0, 00:16:44.939 "data_size": 65536 00:16:44.939 }, 00:16:44.939 { 00:16:44.939 "name": "BaseBdev4", 00:16:44.939 "uuid": "348048f7-0e5c-5090-919c-a03a6ba3d26b", 00:16:44.939 "is_configured": true, 00:16:44.939 "data_offset": 0, 00:16:44.939 "data_size": 65536 00:16:44.939 } 00:16:44.939 ] 00:16:44.939 }' 00:16:44.939 15:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:44.939 15:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:44.939 15:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:44.939 15:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none 
== \n\o\n\e ]] 00:16:44.939 15:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:44.939 15:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.939 15:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.939 15:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:44.939 15:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:44.939 15:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:44.939 15:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.939 15:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.939 15:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.939 15:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.939 15:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.939 15:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.939 15:43:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.939 15:43:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.939 15:43:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.939 15:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.939 "name": "raid_bdev1", 00:16:44.939 "uuid": "8a47c159-d264-4c75-9a01-4df12623ab65", 00:16:44.939 "strip_size_kb": 0, 00:16:44.939 "state": "online", 00:16:44.939 "raid_level": "raid1", 00:16:44.939 "superblock": false, 00:16:44.939 "num_base_bdevs": 4, 00:16:44.939 
"num_base_bdevs_discovered": 3, 00:16:44.939 "num_base_bdevs_operational": 3, 00:16:44.939 "base_bdevs_list": [ 00:16:44.939 { 00:16:44.939 "name": "spare", 00:16:44.939 "uuid": "acc9eae6-eb5c-5841-807d-e59b7992f2b1", 00:16:44.939 "is_configured": true, 00:16:44.939 "data_offset": 0, 00:16:44.939 "data_size": 65536 00:16:44.939 }, 00:16:44.939 { 00:16:44.939 "name": null, 00:16:44.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.939 "is_configured": false, 00:16:44.939 "data_offset": 0, 00:16:44.939 "data_size": 65536 00:16:44.939 }, 00:16:44.939 { 00:16:44.939 "name": "BaseBdev3", 00:16:44.939 "uuid": "18a2d79d-e2a1-51c1-b7b3-903b5ca6bee2", 00:16:44.939 "is_configured": true, 00:16:44.939 "data_offset": 0, 00:16:44.939 "data_size": 65536 00:16:44.939 }, 00:16:44.939 { 00:16:44.939 "name": "BaseBdev4", 00:16:44.939 "uuid": "348048f7-0e5c-5090-919c-a03a6ba3d26b", 00:16:44.939 "is_configured": true, 00:16:44.939 "data_offset": 0, 00:16:44.939 "data_size": 65536 00:16:44.939 } 00:16:44.939 ] 00:16:44.939 }' 00:16:44.939 15:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.939 15:43:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.509 15:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:45.509 15:43:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.509 15:43:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.509 [2024-12-06 15:43:28.588629] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:45.509 [2024-12-06 15:43:28.588787] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:45.509 [2024-12-06 15:43:28.589009] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:45.509 [2024-12-06 15:43:28.589119] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:16:45.509 [2024-12-06 15:43:28.589133] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:45.509 15:43:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.509 15:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.509 15:43:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.509 15:43:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.509 15:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:45.509 15:43:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.509 15:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:45.509 15:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:45.509 15:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:45.509 15:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:45.509 15:43:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:45.509 15:43:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:45.509 15:43:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:45.509 15:43:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:45.509 15:43:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:45.509 15:43:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:45.509 15:43:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:45.509 15:43:28 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:45.509 15:43:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:45.769 /dev/nbd0 00:16:45.769 15:43:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:45.769 15:43:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:45.769 15:43:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:45.769 15:43:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:45.769 15:43:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:45.769 15:43:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:45.769 15:43:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:45.769 15:43:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:45.769 15:43:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:45.769 15:43:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:45.769 15:43:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:45.769 1+0 records in 00:16:45.769 1+0 records out 00:16:45.769 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221046 s, 18.5 MB/s 00:16:45.769 15:43:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:45.769 15:43:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:45.769 15:43:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:16:45.769 15:43:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:45.769 15:43:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:45.769 15:43:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:45.769 15:43:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:45.769 15:43:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:46.029 /dev/nbd1 00:16:46.029 15:43:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:46.029 15:43:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:46.029 15:43:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:46.029 15:43:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:46.029 15:43:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:46.029 15:43:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:46.029 15:43:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:46.029 15:43:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:46.029 15:43:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:46.029 15:43:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:46.029 15:43:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:46.029 1+0 records in 00:16:46.029 1+0 records out 00:16:46.029 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459619 s, 8.9 MB/s 00:16:46.029 15:43:29 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:46.029 15:43:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:46.029 15:43:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:46.029 15:43:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:46.029 15:43:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:46.029 15:43:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:46.029 15:43:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:46.029 15:43:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:46.288 15:43:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:46.288 15:43:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:46.288 15:43:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:46.288 15:43:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:46.288 15:43:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:46.288 15:43:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:46.288 15:43:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:46.548 15:43:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:46.548 15:43:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:46.548 15:43:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:46.548 15:43:29 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:46.548 15:43:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:46.548 15:43:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:46.548 15:43:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:46.548 15:43:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:46.548 15:43:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:46.548 15:43:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:46.548 15:43:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:46.548 15:43:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:46.548 15:43:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:46.548 15:43:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:46.548 15:43:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:46.548 15:43:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:46.811 15:43:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:46.811 15:43:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:46.811 15:43:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:46.811 15:43:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77576 00:16:46.811 15:43:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77576 ']' 00:16:46.811 15:43:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77576 00:16:46.811 15:43:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # 
uname 00:16:46.811 15:43:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:46.811 15:43:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77576 00:16:46.811 killing process with pid 77576 00:16:46.811 Received shutdown signal, test time was about 60.000000 seconds 00:16:46.811 00:16:46.811 Latency(us) 00:16:46.811 [2024-12-06T15:43:30.106Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.811 [2024-12-06T15:43:30.106Z] =================================================================================================================== 00:16:46.811 [2024-12-06T15:43:30.106Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:46.811 15:43:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:46.811 15:43:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:46.811 15:43:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77576' 00:16:46.811 15:43:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77576 00:16:46.811 [2024-12-06 15:43:29.895518] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:46.811 15:43:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77576 00:16:47.497 [2024-12-06 15:43:30.433194] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:48.432 15:43:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:48.433 00:16:48.433 real 0m17.758s 00:16:48.433 user 0m19.305s 00:16:48.433 sys 0m3.667s 00:16:48.433 15:43:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:48.433 ************************************ 00:16:48.433 END TEST raid_rebuild_test 00:16:48.433 ************************************ 00:16:48.433 15:43:31 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@10 -- # set +x 00:16:48.691 15:43:31 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:16:48.691 15:43:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:48.691 15:43:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:48.691 15:43:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:48.691 ************************************ 00:16:48.691 START TEST raid_rebuild_test_sb 00:16:48.691 ************************************ 00:16:48.691 15:43:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:16:48.691 15:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:48.691 15:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:48.691 15:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:48.691 15:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:48.691 15:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:48.691 15:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:48.691 15:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:48.691 15:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:48.691 15:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:48.691 15:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:48.691 15:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:48.691 15:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:48.691 15:43:31 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:48.691 15:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:48.691 15:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:48.691 15:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:48.691 15:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:48.691 15:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:48.691 15:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:48.691 15:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:48.691 15:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:48.691 15:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:48.691 15:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:48.691 15:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:48.691 15:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:48.691 15:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:48.691 15:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:48.691 15:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:48.691 15:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:48.691 15:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:48.691 15:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78018 00:16:48.691 15:43:31 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:48.691 15:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78018 00:16:48.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.691 15:43:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78018 ']' 00:16:48.691 15:43:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.691 15:43:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:48.691 15:43:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.691 15:43:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:48.691 15:43:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.691 [2024-12-06 15:43:31.869160] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:16:48.691 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:48.691 Zero copy mechanism will not be used. 
00:16:48.691 [2024-12-06 15:43:31.869442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78018 ] 00:16:48.949 [2024-12-06 15:43:32.057723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.949 [2024-12-06 15:43:32.202688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.207 [2024-12-06 15:43:32.449257] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:49.207 [2024-12-06 15:43:32.449539] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:49.466 15:43:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:49.466 15:43:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:49.466 15:43:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:49.466 15:43:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:49.466 15:43:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.466 15:43:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.726 BaseBdev1_malloc 00:16:49.726 15:43:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.726 15:43:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:49.726 15:43:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.726 15:43:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.726 [2024-12-06 15:43:32.777967] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:16:49.726 [2024-12-06 15:43:32.778202] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.726 [2024-12-06 15:43:32.778235] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:49.726 [2024-12-06 15:43:32.778251] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.726 [2024-12-06 15:43:32.781037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.726 [2024-12-06 15:43:32.781083] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:49.726 BaseBdev1 00:16:49.726 15:43:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.726 15:43:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:49.726 15:43:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:49.726 15:43:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.726 15:43:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.726 BaseBdev2_malloc 00:16:49.726 15:43:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.726 15:43:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:49.726 15:43:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.726 15:43:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.726 [2024-12-06 15:43:32.839894] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:49.726 [2024-12-06 15:43:32.839959] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.726 [2024-12-06 15:43:32.839986] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:49.726 [2024-12-06 15:43:32.840001] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.726 [2024-12-06 15:43:32.842756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.727 [2024-12-06 15:43:32.842798] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:49.727 BaseBdev2 00:16:49.727 15:43:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.727 15:43:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:49.727 15:43:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:49.727 15:43:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.727 15:43:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.727 BaseBdev3_malloc 00:16:49.727 15:43:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.727 15:43:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:49.727 15:43:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.727 15:43:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.727 [2024-12-06 15:43:32.916549] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:49.727 [2024-12-06 15:43:32.916730] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.727 [2024-12-06 15:43:32.916761] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:49.727 [2024-12-06 15:43:32.916779] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:16:49.727 [2024-12-06 15:43:32.919417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.727 [2024-12-06 15:43:32.919462] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:49.727 BaseBdev3 00:16:49.727 15:43:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.727 15:43:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:49.727 15:43:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:49.727 15:43:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.727 15:43:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.727 BaseBdev4_malloc 00:16:49.727 15:43:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.727 15:43:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:49.727 15:43:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.727 15:43:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.727 [2024-12-06 15:43:32.981559] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:49.727 [2024-12-06 15:43:32.981622] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.727 [2024-12-06 15:43:32.981648] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:49.727 [2024-12-06 15:43:32.981664] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.727 [2024-12-06 15:43:32.984318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.727 [2024-12-06 15:43:32.984473] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:49.727 BaseBdev4 00:16:49.727 15:43:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.727 15:43:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:49.727 15:43:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.727 15:43:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.987 spare_malloc 00:16:49.987 15:43:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.987 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:49.987 15:43:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.987 15:43:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.987 spare_delay 00:16:49.987 15:43:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.987 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:49.987 15:43:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.987 15:43:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.987 [2024-12-06 15:43:33.057409] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:49.987 [2024-12-06 15:43:33.057468] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.987 [2024-12-06 15:43:33.057491] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:49.987 [2024-12-06 15:43:33.057520] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:16:49.987 [2024-12-06 15:43:33.060183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.987 [2024-12-06 15:43:33.060226] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:49.987 spare 00:16:49.987 15:43:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.987 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:49.987 15:43:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.987 15:43:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.987 [2024-12-06 15:43:33.069461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:49.987 [2024-12-06 15:43:33.071844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:49.987 [2024-12-06 15:43:33.071909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:49.987 [2024-12-06 15:43:33.071963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:49.987 [2024-12-06 15:43:33.072163] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:49.987 [2024-12-06 15:43:33.072181] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:49.987 [2024-12-06 15:43:33.072459] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:49.988 [2024-12-06 15:43:33.072679] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:49.988 [2024-12-06 15:43:33.072692] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:49.988 [2024-12-06 15:43:33.072847] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:49.988 15:43:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.988 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:49.988 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:49.988 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:49.988 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:49.988 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:49.988 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:49.988 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.988 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.988 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.988 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.988 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.988 15:43:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.988 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.988 15:43:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.988 15:43:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.988 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.988 "name": "raid_bdev1", 00:16:49.988 "uuid": 
"ded471da-faa2-474b-a08f-b4e45bfd4f1b", 00:16:49.988 "strip_size_kb": 0, 00:16:49.988 "state": "online", 00:16:49.988 "raid_level": "raid1", 00:16:49.988 "superblock": true, 00:16:49.988 "num_base_bdevs": 4, 00:16:49.988 "num_base_bdevs_discovered": 4, 00:16:49.988 "num_base_bdevs_operational": 4, 00:16:49.988 "base_bdevs_list": [ 00:16:49.988 { 00:16:49.988 "name": "BaseBdev1", 00:16:49.988 "uuid": "4c7a2a67-cc74-5598-b82f-2a88f10f1937", 00:16:49.988 "is_configured": true, 00:16:49.988 "data_offset": 2048, 00:16:49.988 "data_size": 63488 00:16:49.988 }, 00:16:49.988 { 00:16:49.988 "name": "BaseBdev2", 00:16:49.988 "uuid": "85e8aef7-cc40-506a-93e9-8e505929dbde", 00:16:49.988 "is_configured": true, 00:16:49.988 "data_offset": 2048, 00:16:49.988 "data_size": 63488 00:16:49.988 }, 00:16:49.988 { 00:16:49.988 "name": "BaseBdev3", 00:16:49.988 "uuid": "6dcc1d4e-100b-5280-94a9-889dea4b0fb6", 00:16:49.988 "is_configured": true, 00:16:49.988 "data_offset": 2048, 00:16:49.988 "data_size": 63488 00:16:49.988 }, 00:16:49.988 { 00:16:49.988 "name": "BaseBdev4", 00:16:49.988 "uuid": "d27a7bb5-9c41-501d-92ef-ee46b870697f", 00:16:49.988 "is_configured": true, 00:16:49.988 "data_offset": 2048, 00:16:49.988 "data_size": 63488 00:16:49.988 } 00:16:49.988 ] 00:16:49.988 }' 00:16:49.988 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.988 15:43:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.247 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:50.247 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:50.247 15:43:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.247 15:43:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.247 [2024-12-06 15:43:33.481195] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:16:50.247 15:43:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.247 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:16:50.247 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:50.247 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.247 15:43:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.247 15:43:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.506 15:43:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.506 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:50.506 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:50.506 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:50.506 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:50.506 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:50.506 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:50.506 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:50.506 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:50.506 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:50.506 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:50.506 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:50.506 15:43:33 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:50.506 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:50.506 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:50.506 [2024-12-06 15:43:33.752556] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:50.506 /dev/nbd0 00:16:50.506 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:50.506 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:50.506 15:43:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:50.506 15:43:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:50.506 15:43:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:50.506 15:43:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:50.506 15:43:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:50.506 15:43:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:50.506 15:43:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:50.506 15:43:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:50.506 15:43:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:50.765 1+0 records in 00:16:50.765 1+0 records out 00:16:50.766 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034306 s, 11.9 MB/s 00:16:50.766 15:43:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:50.766 15:43:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:50.766 15:43:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:50.766 15:43:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:50.766 15:43:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:50.766 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:50.766 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:50.766 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:50.766 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:50.766 15:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:16:56.042 63488+0 records in 00:16:56.042 63488+0 records out 00:16:56.042 32505856 bytes (33 MB, 31 MiB) copied, 5.21061 s, 6.2 MB/s 00:16:56.042 15:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:56.042 15:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:56.042 15:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:56.042 15:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:56.042 15:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:56.042 15:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:56.042 15:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk 
/dev/nbd0 00:16:56.042 15:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:56.042 [2024-12-06 15:43:39.256127] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:56.042 15:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:56.042 15:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:56.042 15:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:56.042 15:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:56.042 15:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:56.042 15:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:56.042 15:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:56.042 15:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:56.042 15:43:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.042 15:43:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.042 [2024-12-06 15:43:39.284195] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:56.042 15:43:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.042 15:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:56.042 15:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:56.042 15:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:56.042 15:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:56.042 15:43:39 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:56.042 15:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:56.042 15:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.042 15:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.042 15:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.042 15:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.042 15:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.042 15:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.042 15:43:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.042 15:43:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.042 15:43:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.300 15:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.300 "name": "raid_bdev1", 00:16:56.300 "uuid": "ded471da-faa2-474b-a08f-b4e45bfd4f1b", 00:16:56.300 "strip_size_kb": 0, 00:16:56.300 "state": "online", 00:16:56.300 "raid_level": "raid1", 00:16:56.300 "superblock": true, 00:16:56.300 "num_base_bdevs": 4, 00:16:56.300 "num_base_bdevs_discovered": 3, 00:16:56.300 "num_base_bdevs_operational": 3, 00:16:56.300 "base_bdevs_list": [ 00:16:56.300 { 00:16:56.300 "name": null, 00:16:56.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.300 "is_configured": false, 00:16:56.300 "data_offset": 0, 00:16:56.300 "data_size": 63488 00:16:56.300 }, 00:16:56.300 { 00:16:56.300 "name": "BaseBdev2", 00:16:56.300 "uuid": "85e8aef7-cc40-506a-93e9-8e505929dbde", 00:16:56.300 "is_configured": true, 00:16:56.300 
"data_offset": 2048, 00:16:56.300 "data_size": 63488 00:16:56.300 }, 00:16:56.300 { 00:16:56.300 "name": "BaseBdev3", 00:16:56.300 "uuid": "6dcc1d4e-100b-5280-94a9-889dea4b0fb6", 00:16:56.300 "is_configured": true, 00:16:56.300 "data_offset": 2048, 00:16:56.300 "data_size": 63488 00:16:56.300 }, 00:16:56.300 { 00:16:56.300 "name": "BaseBdev4", 00:16:56.300 "uuid": "d27a7bb5-9c41-501d-92ef-ee46b870697f", 00:16:56.300 "is_configured": true, 00:16:56.300 "data_offset": 2048, 00:16:56.300 "data_size": 63488 00:16:56.300 } 00:16:56.300 ] 00:16:56.300 }' 00:16:56.300 15:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.300 15:43:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.558 15:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:56.558 15:43:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.558 15:43:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.558 [2024-12-06 15:43:39.735604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:56.558 [2024-12-06 15:43:39.752221] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:16:56.558 15:43:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.558 15:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:56.558 [2024-12-06 15:43:39.754730] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:57.493 15:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:57.493 15:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.493 15:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:16:57.493 15:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:57.493 15:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.493 15:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.493 15:43:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.493 15:43:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.493 15:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.752 15:43:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.752 15:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.752 "name": "raid_bdev1", 00:16:57.752 "uuid": "ded471da-faa2-474b-a08f-b4e45bfd4f1b", 00:16:57.752 "strip_size_kb": 0, 00:16:57.752 "state": "online", 00:16:57.752 "raid_level": "raid1", 00:16:57.752 "superblock": true, 00:16:57.752 "num_base_bdevs": 4, 00:16:57.752 "num_base_bdevs_discovered": 4, 00:16:57.752 "num_base_bdevs_operational": 4, 00:16:57.752 "process": { 00:16:57.752 "type": "rebuild", 00:16:57.752 "target": "spare", 00:16:57.752 "progress": { 00:16:57.752 "blocks": 20480, 00:16:57.752 "percent": 32 00:16:57.752 } 00:16:57.752 }, 00:16:57.752 "base_bdevs_list": [ 00:16:57.752 { 00:16:57.752 "name": "spare", 00:16:57.752 "uuid": "a4c11203-faa4-5098-99df-c1226af51d16", 00:16:57.752 "is_configured": true, 00:16:57.752 "data_offset": 2048, 00:16:57.752 "data_size": 63488 00:16:57.752 }, 00:16:57.752 { 00:16:57.752 "name": "BaseBdev2", 00:16:57.752 "uuid": "85e8aef7-cc40-506a-93e9-8e505929dbde", 00:16:57.752 "is_configured": true, 00:16:57.752 "data_offset": 2048, 00:16:57.752 "data_size": 63488 00:16:57.752 }, 00:16:57.752 { 00:16:57.752 "name": "BaseBdev3", 00:16:57.752 "uuid": 
"6dcc1d4e-100b-5280-94a9-889dea4b0fb6", 00:16:57.752 "is_configured": true, 00:16:57.752 "data_offset": 2048, 00:16:57.752 "data_size": 63488 00:16:57.752 }, 00:16:57.752 { 00:16:57.752 "name": "BaseBdev4", 00:16:57.752 "uuid": "d27a7bb5-9c41-501d-92ef-ee46b870697f", 00:16:57.752 "is_configured": true, 00:16:57.752 "data_offset": 2048, 00:16:57.752 "data_size": 63488 00:16:57.752 } 00:16:57.752 ] 00:16:57.752 }' 00:16:57.752 15:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.752 15:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:57.752 15:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.752 15:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:57.752 15:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:57.752 15:43:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.752 15:43:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.752 [2024-12-06 15:43:40.889810] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:57.752 [2024-12-06 15:43:40.963328] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:57.752 [2024-12-06 15:43:40.963418] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.752 [2024-12-06 15:43:40.963437] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:57.752 [2024-12-06 15:43:40.963450] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:57.752 15:43:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.752 15:43:40 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:57.752 15:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.752 15:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.752 15:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:57.752 15:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:57.752 15:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:57.752 15:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.752 15:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.752 15:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.752 15:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.752 15:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.752 15:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.752 15:43:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.752 15:43:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.752 15:43:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.752 15:43:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.752 "name": "raid_bdev1", 00:16:57.752 "uuid": "ded471da-faa2-474b-a08f-b4e45bfd4f1b", 00:16:57.752 "strip_size_kb": 0, 00:16:57.752 "state": "online", 00:16:57.752 "raid_level": "raid1", 00:16:57.752 "superblock": true, 00:16:57.752 "num_base_bdevs": 4, 00:16:57.752 
"num_base_bdevs_discovered": 3, 00:16:57.752 "num_base_bdevs_operational": 3, 00:16:57.752 "base_bdevs_list": [ 00:16:57.752 { 00:16:57.752 "name": null, 00:16:57.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.752 "is_configured": false, 00:16:57.752 "data_offset": 0, 00:16:57.752 "data_size": 63488 00:16:57.752 }, 00:16:57.752 { 00:16:57.752 "name": "BaseBdev2", 00:16:57.752 "uuid": "85e8aef7-cc40-506a-93e9-8e505929dbde", 00:16:57.752 "is_configured": true, 00:16:57.752 "data_offset": 2048, 00:16:57.752 "data_size": 63488 00:16:57.752 }, 00:16:57.752 { 00:16:57.752 "name": "BaseBdev3", 00:16:57.752 "uuid": "6dcc1d4e-100b-5280-94a9-889dea4b0fb6", 00:16:57.752 "is_configured": true, 00:16:57.752 "data_offset": 2048, 00:16:57.752 "data_size": 63488 00:16:57.752 }, 00:16:57.752 { 00:16:57.752 "name": "BaseBdev4", 00:16:57.752 "uuid": "d27a7bb5-9c41-501d-92ef-ee46b870697f", 00:16:57.752 "is_configured": true, 00:16:57.752 "data_offset": 2048, 00:16:57.752 "data_size": 63488 00:16:57.752 } 00:16:57.752 ] 00:16:57.752 }' 00:16:57.752 15:43:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.752 15:43:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.318 15:43:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:58.318 15:43:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:58.318 15:43:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:58.318 15:43:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:58.318 15:43:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:58.318 15:43:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.318 15:43:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:16:58.318 15:43:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.318 15:43:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.318 15:43:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.318 15:43:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:58.318 "name": "raid_bdev1", 00:16:58.318 "uuid": "ded471da-faa2-474b-a08f-b4e45bfd4f1b", 00:16:58.318 "strip_size_kb": 0, 00:16:58.318 "state": "online", 00:16:58.318 "raid_level": "raid1", 00:16:58.318 "superblock": true, 00:16:58.318 "num_base_bdevs": 4, 00:16:58.318 "num_base_bdevs_discovered": 3, 00:16:58.318 "num_base_bdevs_operational": 3, 00:16:58.318 "base_bdevs_list": [ 00:16:58.319 { 00:16:58.319 "name": null, 00:16:58.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.319 "is_configured": false, 00:16:58.319 "data_offset": 0, 00:16:58.319 "data_size": 63488 00:16:58.319 }, 00:16:58.319 { 00:16:58.319 "name": "BaseBdev2", 00:16:58.319 "uuid": "85e8aef7-cc40-506a-93e9-8e505929dbde", 00:16:58.319 "is_configured": true, 00:16:58.319 "data_offset": 2048, 00:16:58.319 "data_size": 63488 00:16:58.319 }, 00:16:58.319 { 00:16:58.319 "name": "BaseBdev3", 00:16:58.319 "uuid": "6dcc1d4e-100b-5280-94a9-889dea4b0fb6", 00:16:58.319 "is_configured": true, 00:16:58.319 "data_offset": 2048, 00:16:58.319 "data_size": 63488 00:16:58.319 }, 00:16:58.319 { 00:16:58.319 "name": "BaseBdev4", 00:16:58.319 "uuid": "d27a7bb5-9c41-501d-92ef-ee46b870697f", 00:16:58.319 "is_configured": true, 00:16:58.319 "data_offset": 2048, 00:16:58.319 "data_size": 63488 00:16:58.319 } 00:16:58.319 ] 00:16:58.319 }' 00:16:58.319 15:43:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:58.319 15:43:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:16:58.319 15:43:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:58.319 15:43:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:58.319 15:43:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:58.319 15:43:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.319 15:43:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.319 [2024-12-06 15:43:41.530390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:58.319 [2024-12-06 15:43:41.544691] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:16:58.319 15:43:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.319 15:43:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:58.319 [2024-12-06 15:43:41.547178] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:59.693 15:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:59.693 15:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.693 15:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:59.693 15:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:59.693 15:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.693 15:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.693 15:43:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.693 15:43:42 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.693 15:43:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.693 15:43:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.693 15:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.693 "name": "raid_bdev1", 00:16:59.693 "uuid": "ded471da-faa2-474b-a08f-b4e45bfd4f1b", 00:16:59.693 "strip_size_kb": 0, 00:16:59.693 "state": "online", 00:16:59.693 "raid_level": "raid1", 00:16:59.693 "superblock": true, 00:16:59.693 "num_base_bdevs": 4, 00:16:59.693 "num_base_bdevs_discovered": 4, 00:16:59.693 "num_base_bdevs_operational": 4, 00:16:59.693 "process": { 00:16:59.693 "type": "rebuild", 00:16:59.693 "target": "spare", 00:16:59.693 "progress": { 00:16:59.693 "blocks": 20480, 00:16:59.693 "percent": 32 00:16:59.693 } 00:16:59.693 }, 00:16:59.693 "base_bdevs_list": [ 00:16:59.693 { 00:16:59.693 "name": "spare", 00:16:59.693 "uuid": "a4c11203-faa4-5098-99df-c1226af51d16", 00:16:59.693 "is_configured": true, 00:16:59.693 "data_offset": 2048, 00:16:59.693 "data_size": 63488 00:16:59.693 }, 00:16:59.693 { 00:16:59.693 "name": "BaseBdev2", 00:16:59.693 "uuid": "85e8aef7-cc40-506a-93e9-8e505929dbde", 00:16:59.693 "is_configured": true, 00:16:59.693 "data_offset": 2048, 00:16:59.693 "data_size": 63488 00:16:59.693 }, 00:16:59.693 { 00:16:59.693 "name": "BaseBdev3", 00:16:59.693 "uuid": "6dcc1d4e-100b-5280-94a9-889dea4b0fb6", 00:16:59.693 "is_configured": true, 00:16:59.693 "data_offset": 2048, 00:16:59.693 "data_size": 63488 00:16:59.693 }, 00:16:59.693 { 00:16:59.693 "name": "BaseBdev4", 00:16:59.693 "uuid": "d27a7bb5-9c41-501d-92ef-ee46b870697f", 00:16:59.693 "is_configured": true, 00:16:59.693 "data_offset": 2048, 00:16:59.693 "data_size": 63488 00:16:59.693 } 00:16:59.693 ] 00:16:59.693 }' 00:16:59.693 15:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:16:59.693 15:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:59.693 15:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.693 15:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:59.693 15:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:59.693 15:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:59.693 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:59.693 15:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:59.693 15:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:59.693 15:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:59.693 15:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:59.693 15:43:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.693 15:43:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.693 [2024-12-06 15:43:42.687136] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:59.693 [2024-12-06 15:43:42.855722] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:16:59.693 15:43:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.693 15:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:59.693 15:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:59.693 15:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:16:59.693 15:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.693 15:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:59.693 15:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:59.693 15:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.693 15:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.693 15:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.693 15:43:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.693 15:43:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.693 15:43:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.694 15:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.694 "name": "raid_bdev1", 00:16:59.694 "uuid": "ded471da-faa2-474b-a08f-b4e45bfd4f1b", 00:16:59.694 "strip_size_kb": 0, 00:16:59.694 "state": "online", 00:16:59.694 "raid_level": "raid1", 00:16:59.694 "superblock": true, 00:16:59.694 "num_base_bdevs": 4, 00:16:59.694 "num_base_bdevs_discovered": 3, 00:16:59.694 "num_base_bdevs_operational": 3, 00:16:59.694 "process": { 00:16:59.694 "type": "rebuild", 00:16:59.694 "target": "spare", 00:16:59.694 "progress": { 00:16:59.694 "blocks": 24576, 00:16:59.694 "percent": 38 00:16:59.694 } 00:16:59.694 }, 00:16:59.694 "base_bdevs_list": [ 00:16:59.694 { 00:16:59.694 "name": "spare", 00:16:59.694 "uuid": "a4c11203-faa4-5098-99df-c1226af51d16", 00:16:59.694 "is_configured": true, 00:16:59.694 "data_offset": 2048, 00:16:59.694 "data_size": 63488 00:16:59.694 }, 00:16:59.694 { 00:16:59.694 "name": null, 00:16:59.694 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:59.694 "is_configured": false, 00:16:59.694 "data_offset": 0, 00:16:59.694 "data_size": 63488 00:16:59.694 }, 00:16:59.694 { 00:16:59.694 "name": "BaseBdev3", 00:16:59.694 "uuid": "6dcc1d4e-100b-5280-94a9-889dea4b0fb6", 00:16:59.694 "is_configured": true, 00:16:59.694 "data_offset": 2048, 00:16:59.694 "data_size": 63488 00:16:59.694 }, 00:16:59.694 { 00:16:59.694 "name": "BaseBdev4", 00:16:59.694 "uuid": "d27a7bb5-9c41-501d-92ef-ee46b870697f", 00:16:59.694 "is_configured": true, 00:16:59.694 "data_offset": 2048, 00:16:59.694 "data_size": 63488 00:16:59.694 } 00:16:59.694 ] 00:16:59.694 }' 00:16:59.694 15:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.694 15:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:59.694 15:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.953 15:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:59.953 15:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=471 00:16:59.953 15:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:59.953 15:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:59.953 15:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.953 15:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:59.953 15:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:59.953 15:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.953 15:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:16:59.953 15:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.953 15:43:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.953 15:43:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.953 15:43:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.953 15:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.953 "name": "raid_bdev1", 00:16:59.953 "uuid": "ded471da-faa2-474b-a08f-b4e45bfd4f1b", 00:16:59.953 "strip_size_kb": 0, 00:16:59.953 "state": "online", 00:16:59.953 "raid_level": "raid1", 00:16:59.953 "superblock": true, 00:16:59.953 "num_base_bdevs": 4, 00:16:59.953 "num_base_bdevs_discovered": 3, 00:16:59.953 "num_base_bdevs_operational": 3, 00:16:59.953 "process": { 00:16:59.953 "type": "rebuild", 00:16:59.953 "target": "spare", 00:16:59.953 "progress": { 00:16:59.953 "blocks": 26624, 00:16:59.953 "percent": 41 00:16:59.953 } 00:16:59.953 }, 00:16:59.953 "base_bdevs_list": [ 00:16:59.953 { 00:16:59.953 "name": "spare", 00:16:59.953 "uuid": "a4c11203-faa4-5098-99df-c1226af51d16", 00:16:59.953 "is_configured": true, 00:16:59.953 "data_offset": 2048, 00:16:59.953 "data_size": 63488 00:16:59.953 }, 00:16:59.953 { 00:16:59.953 "name": null, 00:16:59.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.953 "is_configured": false, 00:16:59.953 "data_offset": 0, 00:16:59.953 "data_size": 63488 00:16:59.953 }, 00:16:59.953 { 00:16:59.953 "name": "BaseBdev3", 00:16:59.953 "uuid": "6dcc1d4e-100b-5280-94a9-889dea4b0fb6", 00:16:59.953 "is_configured": true, 00:16:59.953 "data_offset": 2048, 00:16:59.953 "data_size": 63488 00:16:59.953 }, 00:16:59.953 { 00:16:59.953 "name": "BaseBdev4", 00:16:59.953 "uuid": "d27a7bb5-9c41-501d-92ef-ee46b870697f", 00:16:59.953 "is_configured": true, 00:16:59.953 "data_offset": 2048, 00:16:59.953 "data_size": 63488 
00:16:59.953 } 00:16:59.953 ] 00:16:59.953 }' 00:16:59.953 15:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.953 15:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:59.953 15:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.953 15:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:59.953 15:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:00.916 15:43:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:00.916 15:43:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:00.916 15:43:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:00.916 15:43:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:00.916 15:43:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:00.916 15:43:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:00.916 15:43:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.916 15:43:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.916 15:43:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.916 15:43:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.916 15:43:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.916 15:43:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:00.916 "name": "raid_bdev1", 00:17:00.916 "uuid": 
"ded471da-faa2-474b-a08f-b4e45bfd4f1b", 00:17:00.916 "strip_size_kb": 0, 00:17:00.916 "state": "online", 00:17:00.916 "raid_level": "raid1", 00:17:00.916 "superblock": true, 00:17:00.916 "num_base_bdevs": 4, 00:17:00.916 "num_base_bdevs_discovered": 3, 00:17:00.916 "num_base_bdevs_operational": 3, 00:17:00.916 "process": { 00:17:00.916 "type": "rebuild", 00:17:00.916 "target": "spare", 00:17:00.916 "progress": { 00:17:00.916 "blocks": 49152, 00:17:00.916 "percent": 77 00:17:00.916 } 00:17:00.916 }, 00:17:00.916 "base_bdevs_list": [ 00:17:00.916 { 00:17:00.916 "name": "spare", 00:17:00.916 "uuid": "a4c11203-faa4-5098-99df-c1226af51d16", 00:17:00.916 "is_configured": true, 00:17:00.916 "data_offset": 2048, 00:17:00.916 "data_size": 63488 00:17:00.916 }, 00:17:00.916 { 00:17:00.916 "name": null, 00:17:00.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.916 "is_configured": false, 00:17:00.916 "data_offset": 0, 00:17:00.916 "data_size": 63488 00:17:00.916 }, 00:17:00.916 { 00:17:00.916 "name": "BaseBdev3", 00:17:00.916 "uuid": "6dcc1d4e-100b-5280-94a9-889dea4b0fb6", 00:17:00.916 "is_configured": true, 00:17:00.916 "data_offset": 2048, 00:17:00.916 "data_size": 63488 00:17:00.916 }, 00:17:00.916 { 00:17:00.916 "name": "BaseBdev4", 00:17:00.916 "uuid": "d27a7bb5-9c41-501d-92ef-ee46b870697f", 00:17:00.916 "is_configured": true, 00:17:00.916 "data_offset": 2048, 00:17:00.916 "data_size": 63488 00:17:00.916 } 00:17:00.916 ] 00:17:00.916 }' 00:17:00.916 15:43:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:00.916 15:43:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:00.916 15:43:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.176 15:43:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:01.176 15:43:44 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:17:01.744 [2024-12-06 15:43:44.769367] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:01.744 [2024-12-06 15:43:44.769472] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:01.744 [2024-12-06 15:43:44.769647] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:02.009 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:02.009 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:02.009 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:02.009 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:02.009 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:02.009 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.009 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.009 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.009 15:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.009 15:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.009 15:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.009 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.009 "name": "raid_bdev1", 00:17:02.009 "uuid": "ded471da-faa2-474b-a08f-b4e45bfd4f1b", 00:17:02.009 "strip_size_kb": 0, 00:17:02.009 "state": "online", 00:17:02.009 "raid_level": "raid1", 00:17:02.009 "superblock": true, 00:17:02.009 "num_base_bdevs": 
4, 00:17:02.009 "num_base_bdevs_discovered": 3, 00:17:02.009 "num_base_bdevs_operational": 3, 00:17:02.009 "base_bdevs_list": [ 00:17:02.009 { 00:17:02.009 "name": "spare", 00:17:02.009 "uuid": "a4c11203-faa4-5098-99df-c1226af51d16", 00:17:02.009 "is_configured": true, 00:17:02.009 "data_offset": 2048, 00:17:02.009 "data_size": 63488 00:17:02.009 }, 00:17:02.009 { 00:17:02.009 "name": null, 00:17:02.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.009 "is_configured": false, 00:17:02.009 "data_offset": 0, 00:17:02.009 "data_size": 63488 00:17:02.009 }, 00:17:02.009 { 00:17:02.009 "name": "BaseBdev3", 00:17:02.009 "uuid": "6dcc1d4e-100b-5280-94a9-889dea4b0fb6", 00:17:02.009 "is_configured": true, 00:17:02.009 "data_offset": 2048, 00:17:02.009 "data_size": 63488 00:17:02.009 }, 00:17:02.009 { 00:17:02.009 "name": "BaseBdev4", 00:17:02.009 "uuid": "d27a7bb5-9c41-501d-92ef-ee46b870697f", 00:17:02.009 "is_configured": true, 00:17:02.009 "data_offset": 2048, 00:17:02.009 "data_size": 63488 00:17:02.009 } 00:17:02.009 ] 00:17:02.009 }' 00:17:02.009 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.268 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:02.268 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.268 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:02.268 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:02.268 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:02.268 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:02.268 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:02.268 15:43:45 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:02.268 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.268 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.268 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.268 15:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.268 15:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.268 15:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.268 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.268 "name": "raid_bdev1", 00:17:02.268 "uuid": "ded471da-faa2-474b-a08f-b4e45bfd4f1b", 00:17:02.268 "strip_size_kb": 0, 00:17:02.268 "state": "online", 00:17:02.268 "raid_level": "raid1", 00:17:02.268 "superblock": true, 00:17:02.268 "num_base_bdevs": 4, 00:17:02.268 "num_base_bdevs_discovered": 3, 00:17:02.268 "num_base_bdevs_operational": 3, 00:17:02.268 "base_bdevs_list": [ 00:17:02.268 { 00:17:02.268 "name": "spare", 00:17:02.268 "uuid": "a4c11203-faa4-5098-99df-c1226af51d16", 00:17:02.268 "is_configured": true, 00:17:02.268 "data_offset": 2048, 00:17:02.268 "data_size": 63488 00:17:02.268 }, 00:17:02.268 { 00:17:02.268 "name": null, 00:17:02.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.268 "is_configured": false, 00:17:02.268 "data_offset": 0, 00:17:02.268 "data_size": 63488 00:17:02.268 }, 00:17:02.268 { 00:17:02.268 "name": "BaseBdev3", 00:17:02.268 "uuid": "6dcc1d4e-100b-5280-94a9-889dea4b0fb6", 00:17:02.268 "is_configured": true, 00:17:02.268 "data_offset": 2048, 00:17:02.268 "data_size": 63488 00:17:02.268 }, 00:17:02.268 { 00:17:02.268 "name": "BaseBdev4", 00:17:02.268 "uuid": 
"d27a7bb5-9c41-501d-92ef-ee46b870697f", 00:17:02.268 "is_configured": true, 00:17:02.268 "data_offset": 2048, 00:17:02.268 "data_size": 63488 00:17:02.268 } 00:17:02.268 ] 00:17:02.268 }' 00:17:02.268 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.268 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:02.268 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.268 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:02.268 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:02.268 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.268 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:02.268 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:02.268 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:02.268 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:02.268 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.268 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.268 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.268 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.268 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.268 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.268 15:43:45 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.268 15:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.268 15:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.268 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.268 "name": "raid_bdev1", 00:17:02.268 "uuid": "ded471da-faa2-474b-a08f-b4e45bfd4f1b", 00:17:02.268 "strip_size_kb": 0, 00:17:02.268 "state": "online", 00:17:02.268 "raid_level": "raid1", 00:17:02.268 "superblock": true, 00:17:02.268 "num_base_bdevs": 4, 00:17:02.268 "num_base_bdevs_discovered": 3, 00:17:02.268 "num_base_bdevs_operational": 3, 00:17:02.268 "base_bdevs_list": [ 00:17:02.268 { 00:17:02.268 "name": "spare", 00:17:02.268 "uuid": "a4c11203-faa4-5098-99df-c1226af51d16", 00:17:02.268 "is_configured": true, 00:17:02.268 "data_offset": 2048, 00:17:02.268 "data_size": 63488 00:17:02.268 }, 00:17:02.268 { 00:17:02.268 "name": null, 00:17:02.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.268 "is_configured": false, 00:17:02.268 "data_offset": 0, 00:17:02.268 "data_size": 63488 00:17:02.268 }, 00:17:02.268 { 00:17:02.268 "name": "BaseBdev3", 00:17:02.268 "uuid": "6dcc1d4e-100b-5280-94a9-889dea4b0fb6", 00:17:02.268 "is_configured": true, 00:17:02.268 "data_offset": 2048, 00:17:02.268 "data_size": 63488 00:17:02.268 }, 00:17:02.268 { 00:17:02.268 "name": "BaseBdev4", 00:17:02.268 "uuid": "d27a7bb5-9c41-501d-92ef-ee46b870697f", 00:17:02.268 "is_configured": true, 00:17:02.268 "data_offset": 2048, 00:17:02.268 "data_size": 63488 00:17:02.268 } 00:17:02.268 ] 00:17:02.268 }' 00:17:02.268 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.268 15:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.835 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd 
bdev_raid_delete raid_bdev1 00:17:02.835 15:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.836 15:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.836 [2024-12-06 15:43:45.935636] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:02.836 [2024-12-06 15:43:45.935798] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:02.836 [2024-12-06 15:43:45.935932] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:02.836 [2024-12-06 15:43:45.936036] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:02.836 [2024-12-06 15:43:45.936049] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:02.836 15:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.836 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.836 15:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.836 15:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.836 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:02.836 15:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.836 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:02.836 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:02.836 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:02.836 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:02.836 
15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:02.836 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:02.836 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:02.836 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:02.836 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:02.836 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:02.836 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:02.836 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:02.836 15:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:03.095 /dev/nbd0 00:17:03.095 15:43:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:03.095 15:43:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:03.095 15:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:03.095 15:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:03.095 15:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:03.095 15:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:03.095 15:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:03.095 15:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:03.095 15:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:03.095 15:43:46 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:03.095 15:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:03.095 1+0 records in 00:17:03.095 1+0 records out 00:17:03.095 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281809 s, 14.5 MB/s 00:17:03.095 15:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:03.095 15:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:03.095 15:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:03.095 15:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:03.095 15:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:03.095 15:43:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:03.095 15:43:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:03.095 15:43:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:03.355 /dev/nbd1 00:17:03.355 15:43:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:03.355 15:43:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:03.355 15:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:03.355 15:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:03.355 15:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:03.355 15:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- 
# (( i <= 20 )) 00:17:03.355 15:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:03.355 15:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:03.355 15:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:03.355 15:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:03.355 15:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:03.355 1+0 records in 00:17:03.355 1+0 records out 00:17:03.355 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000450958 s, 9.1 MB/s 00:17:03.355 15:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:03.355 15:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:03.355 15:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:03.355 15:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:03.355 15:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:03.355 15:43:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:03.355 15:43:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:03.355 15:43:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:03.615 15:43:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:03.615 15:43:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:03.615 15:43:46 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:03.615 15:43:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:03.615 15:43:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:03.615 15:43:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:03.615 15:43:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:03.615 15:43:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:03.615 15:43:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:03.615 15:43:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:03.615 15:43:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:03.615 15:43:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:03.615 15:43:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:03.615 15:43:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:03.615 15:43:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:03.615 15:43:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:03.615 15:43:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:03.874 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:03.874 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:03.874 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:03.874 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 
-- # (( i = 1 )) 00:17:03.874 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:03.874 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:03.874 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:03.875 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:03.875 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:03.875 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:03.875 15:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.875 15:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.875 15:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.875 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:03.875 15:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.875 15:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.875 [2024-12-06 15:43:47.152647] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:03.875 [2024-12-06 15:43:47.152723] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.875 [2024-12-06 15:43:47.152753] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:03.875 [2024-12-06 15:43:47.152766] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.875 [2024-12-06 15:43:47.155669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.875 [2024-12-06 15:43:47.155712] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: spare 00:17:03.875 [2024-12-06 15:43:47.155822] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:03.875 [2024-12-06 15:43:47.155880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:03.875 [2024-12-06 15:43:47.156060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:03.875 [2024-12-06 15:43:47.156170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:03.875 spare 00:17:03.875 15:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.875 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:03.875 15:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.875 15:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.133 [2024-12-06 15:43:47.256112] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:04.133 [2024-12-06 15:43:47.256141] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:04.133 [2024-12-06 15:43:47.256500] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:17:04.133 [2024-12-06 15:43:47.256728] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:04.133 [2024-12-06 15:43:47.256752] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:04.133 [2024-12-06 15:43:47.256928] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:04.133 15:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.133 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:04.133 15:43:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:04.133 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:04.133 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:04.133 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:04.133 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:04.133 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.134 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.134 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.134 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.134 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.134 15:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.134 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.134 15:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.134 15:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.134 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.134 "name": "raid_bdev1", 00:17:04.134 "uuid": "ded471da-faa2-474b-a08f-b4e45bfd4f1b", 00:17:04.134 "strip_size_kb": 0, 00:17:04.134 "state": "online", 00:17:04.134 "raid_level": "raid1", 00:17:04.134 "superblock": true, 00:17:04.134 "num_base_bdevs": 4, 00:17:04.134 "num_base_bdevs_discovered": 3, 00:17:04.134 "num_base_bdevs_operational": 3, 00:17:04.134 "base_bdevs_list": [ 00:17:04.134 { 
00:17:04.134 "name": "spare", 00:17:04.134 "uuid": "a4c11203-faa4-5098-99df-c1226af51d16", 00:17:04.134 "is_configured": true, 00:17:04.134 "data_offset": 2048, 00:17:04.134 "data_size": 63488 00:17:04.134 }, 00:17:04.134 { 00:17:04.134 "name": null, 00:17:04.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.134 "is_configured": false, 00:17:04.134 "data_offset": 2048, 00:17:04.134 "data_size": 63488 00:17:04.134 }, 00:17:04.134 { 00:17:04.134 "name": "BaseBdev3", 00:17:04.134 "uuid": "6dcc1d4e-100b-5280-94a9-889dea4b0fb6", 00:17:04.134 "is_configured": true, 00:17:04.134 "data_offset": 2048, 00:17:04.134 "data_size": 63488 00:17:04.134 }, 00:17:04.134 { 00:17:04.134 "name": "BaseBdev4", 00:17:04.134 "uuid": "d27a7bb5-9c41-501d-92ef-ee46b870697f", 00:17:04.134 "is_configured": true, 00:17:04.134 "data_offset": 2048, 00:17:04.134 "data_size": 63488 00:17:04.134 } 00:17:04.134 ] 00:17:04.134 }' 00:17:04.134 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.134 15:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.393 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:04.393 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:04.393 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:04.393 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:04.393 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.393 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.393 15:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.393 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:17:04.393 15:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.393 15:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.652 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:04.652 "name": "raid_bdev1", 00:17:04.652 "uuid": "ded471da-faa2-474b-a08f-b4e45bfd4f1b", 00:17:04.652 "strip_size_kb": 0, 00:17:04.652 "state": "online", 00:17:04.652 "raid_level": "raid1", 00:17:04.652 "superblock": true, 00:17:04.652 "num_base_bdevs": 4, 00:17:04.652 "num_base_bdevs_discovered": 3, 00:17:04.652 "num_base_bdevs_operational": 3, 00:17:04.652 "base_bdevs_list": [ 00:17:04.652 { 00:17:04.652 "name": "spare", 00:17:04.652 "uuid": "a4c11203-faa4-5098-99df-c1226af51d16", 00:17:04.652 "is_configured": true, 00:17:04.652 "data_offset": 2048, 00:17:04.652 "data_size": 63488 00:17:04.652 }, 00:17:04.652 { 00:17:04.652 "name": null, 00:17:04.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.652 "is_configured": false, 00:17:04.652 "data_offset": 2048, 00:17:04.652 "data_size": 63488 00:17:04.652 }, 00:17:04.652 { 00:17:04.652 "name": "BaseBdev3", 00:17:04.652 "uuid": "6dcc1d4e-100b-5280-94a9-889dea4b0fb6", 00:17:04.652 "is_configured": true, 00:17:04.652 "data_offset": 2048, 00:17:04.652 "data_size": 63488 00:17:04.652 }, 00:17:04.652 { 00:17:04.652 "name": "BaseBdev4", 00:17:04.652 "uuid": "d27a7bb5-9c41-501d-92ef-ee46b870697f", 00:17:04.652 "is_configured": true, 00:17:04.652 "data_offset": 2048, 00:17:04.652 "data_size": 63488 00:17:04.652 } 00:17:04.652 ] 00:17:04.652 }' 00:17:04.652 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:04.652 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:04.652 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.652 
15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:04.652 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:04.652 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.652 15:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.652 15:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.652 15:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.652 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:04.652 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:04.652 15:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.652 15:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.652 [2024-12-06 15:43:47.820123] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:04.652 15:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.652 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:04.652 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:04.652 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:04.652 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:04.652 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:04.652 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:04.652 15:43:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.652 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.652 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.652 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.652 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.652 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.652 15:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.652 15:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.652 15:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.652 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.652 "name": "raid_bdev1", 00:17:04.652 "uuid": "ded471da-faa2-474b-a08f-b4e45bfd4f1b", 00:17:04.653 "strip_size_kb": 0, 00:17:04.653 "state": "online", 00:17:04.653 "raid_level": "raid1", 00:17:04.653 "superblock": true, 00:17:04.653 "num_base_bdevs": 4, 00:17:04.653 "num_base_bdevs_discovered": 2, 00:17:04.653 "num_base_bdevs_operational": 2, 00:17:04.653 "base_bdevs_list": [ 00:17:04.653 { 00:17:04.653 "name": null, 00:17:04.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.653 "is_configured": false, 00:17:04.653 "data_offset": 0, 00:17:04.653 "data_size": 63488 00:17:04.653 }, 00:17:04.653 { 00:17:04.653 "name": null, 00:17:04.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.653 "is_configured": false, 00:17:04.653 "data_offset": 2048, 00:17:04.653 "data_size": 63488 00:17:04.653 }, 00:17:04.653 { 00:17:04.653 "name": "BaseBdev3", 00:17:04.653 "uuid": "6dcc1d4e-100b-5280-94a9-889dea4b0fb6", 00:17:04.653 
"is_configured": true, 00:17:04.653 "data_offset": 2048, 00:17:04.653 "data_size": 63488 00:17:04.653 }, 00:17:04.653 { 00:17:04.653 "name": "BaseBdev4", 00:17:04.653 "uuid": "d27a7bb5-9c41-501d-92ef-ee46b870697f", 00:17:04.653 "is_configured": true, 00:17:04.653 "data_offset": 2048, 00:17:04.653 "data_size": 63488 00:17:04.653 } 00:17:04.653 ] 00:17:04.653 }' 00:17:04.653 15:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.653 15:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.220 15:43:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:05.220 15:43:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.220 15:43:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.220 [2024-12-06 15:43:48.231640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:05.220 [2024-12-06 15:43:48.231882] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:17:05.220 [2024-12-06 15:43:48.231910] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:05.220 [2024-12-06 15:43:48.231956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:05.220 [2024-12-06 15:43:48.247406] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:17:05.220 15:43:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.220 15:43:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:05.220 [2024-12-06 15:43:48.249918] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:06.158 15:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:06.158 15:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.158 15:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:06.158 15:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:06.158 15:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.158 15:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.158 15:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.158 15:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.158 15:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.158 15:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.159 15:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.159 "name": "raid_bdev1", 00:17:06.159 "uuid": "ded471da-faa2-474b-a08f-b4e45bfd4f1b", 00:17:06.159 "strip_size_kb": 0, 00:17:06.159 "state": "online", 00:17:06.159 "raid_level": "raid1", 
00:17:06.159 "superblock": true, 00:17:06.159 "num_base_bdevs": 4, 00:17:06.159 "num_base_bdevs_discovered": 3, 00:17:06.159 "num_base_bdevs_operational": 3, 00:17:06.159 "process": { 00:17:06.159 "type": "rebuild", 00:17:06.159 "target": "spare", 00:17:06.159 "progress": { 00:17:06.159 "blocks": 20480, 00:17:06.159 "percent": 32 00:17:06.159 } 00:17:06.159 }, 00:17:06.159 "base_bdevs_list": [ 00:17:06.159 { 00:17:06.159 "name": "spare", 00:17:06.159 "uuid": "a4c11203-faa4-5098-99df-c1226af51d16", 00:17:06.159 "is_configured": true, 00:17:06.159 "data_offset": 2048, 00:17:06.159 "data_size": 63488 00:17:06.159 }, 00:17:06.159 { 00:17:06.159 "name": null, 00:17:06.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.159 "is_configured": false, 00:17:06.159 "data_offset": 2048, 00:17:06.159 "data_size": 63488 00:17:06.159 }, 00:17:06.159 { 00:17:06.159 "name": "BaseBdev3", 00:17:06.159 "uuid": "6dcc1d4e-100b-5280-94a9-889dea4b0fb6", 00:17:06.159 "is_configured": true, 00:17:06.159 "data_offset": 2048, 00:17:06.159 "data_size": 63488 00:17:06.159 }, 00:17:06.159 { 00:17:06.159 "name": "BaseBdev4", 00:17:06.159 "uuid": "d27a7bb5-9c41-501d-92ef-ee46b870697f", 00:17:06.159 "is_configured": true, 00:17:06.159 "data_offset": 2048, 00:17:06.159 "data_size": 63488 00:17:06.159 } 00:17:06.159 ] 00:17:06.159 }' 00:17:06.159 15:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.159 15:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:06.159 15:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.159 15:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:06.159 15:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:06.159 15:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:06.159 15:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.159 [2024-12-06 15:43:49.389878] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:06.418 [2024-12-06 15:43:49.458526] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:06.418 [2024-12-06 15:43:49.458606] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:06.418 [2024-12-06 15:43:49.458628] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:06.418 [2024-12-06 15:43:49.458637] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:06.418 15:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.418 15:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:06.418 15:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.418 15:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:06.418 15:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:06.418 15:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:06.418 15:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:06.418 15:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.418 15:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.418 15:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.418 15:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.418 15:43:49 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.418 15:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.418 15:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.418 15:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.418 15:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.418 15:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.418 "name": "raid_bdev1", 00:17:06.418 "uuid": "ded471da-faa2-474b-a08f-b4e45bfd4f1b", 00:17:06.418 "strip_size_kb": 0, 00:17:06.418 "state": "online", 00:17:06.418 "raid_level": "raid1", 00:17:06.418 "superblock": true, 00:17:06.418 "num_base_bdevs": 4, 00:17:06.418 "num_base_bdevs_discovered": 2, 00:17:06.418 "num_base_bdevs_operational": 2, 00:17:06.418 "base_bdevs_list": [ 00:17:06.418 { 00:17:06.418 "name": null, 00:17:06.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.418 "is_configured": false, 00:17:06.418 "data_offset": 0, 00:17:06.418 "data_size": 63488 00:17:06.418 }, 00:17:06.418 { 00:17:06.418 "name": null, 00:17:06.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.418 "is_configured": false, 00:17:06.418 "data_offset": 2048, 00:17:06.418 "data_size": 63488 00:17:06.418 }, 00:17:06.418 { 00:17:06.418 "name": "BaseBdev3", 00:17:06.418 "uuid": "6dcc1d4e-100b-5280-94a9-889dea4b0fb6", 00:17:06.418 "is_configured": true, 00:17:06.418 "data_offset": 2048, 00:17:06.418 "data_size": 63488 00:17:06.418 }, 00:17:06.418 { 00:17:06.418 "name": "BaseBdev4", 00:17:06.418 "uuid": "d27a7bb5-9c41-501d-92ef-ee46b870697f", 00:17:06.418 "is_configured": true, 00:17:06.418 "data_offset": 2048, 00:17:06.418 "data_size": 63488 00:17:06.418 } 00:17:06.418 ] 00:17:06.418 }' 00:17:06.418 15:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:17:06.418 15:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.678 15:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:06.678 15:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.678 15:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.678 [2024-12-06 15:43:49.893583] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:06.678 [2024-12-06 15:43:49.893664] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.678 [2024-12-06 15:43:49.893700] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:17:06.678 [2024-12-06 15:43:49.893714] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.678 [2024-12-06 15:43:49.894317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.678 [2024-12-06 15:43:49.894356] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:06.678 [2024-12-06 15:43:49.894477] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:06.678 [2024-12-06 15:43:49.894494] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:17:06.678 [2024-12-06 15:43:49.894526] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:06.678 [2024-12-06 15:43:49.894560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:06.678 [2024-12-06 15:43:49.909136] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:17:06.678 spare 00:17:06.678 15:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.678 15:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:06.678 [2024-12-06 15:43:49.911625] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:08.057 15:43:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:08.057 15:43:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.057 15:43:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:08.057 15:43:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:08.057 15:43:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.057 15:43:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.057 15:43:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.057 15:43:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.057 15:43:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.057 15:43:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.057 15:43:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.057 "name": "raid_bdev1", 00:17:08.057 "uuid": "ded471da-faa2-474b-a08f-b4e45bfd4f1b", 00:17:08.057 "strip_size_kb": 0, 00:17:08.057 "state": "online", 00:17:08.057 
"raid_level": "raid1", 00:17:08.057 "superblock": true, 00:17:08.057 "num_base_bdevs": 4, 00:17:08.057 "num_base_bdevs_discovered": 3, 00:17:08.057 "num_base_bdevs_operational": 3, 00:17:08.057 "process": { 00:17:08.057 "type": "rebuild", 00:17:08.057 "target": "spare", 00:17:08.057 "progress": { 00:17:08.057 "blocks": 20480, 00:17:08.057 "percent": 32 00:17:08.057 } 00:17:08.057 }, 00:17:08.057 "base_bdevs_list": [ 00:17:08.057 { 00:17:08.057 "name": "spare", 00:17:08.057 "uuid": "a4c11203-faa4-5098-99df-c1226af51d16", 00:17:08.057 "is_configured": true, 00:17:08.057 "data_offset": 2048, 00:17:08.057 "data_size": 63488 00:17:08.057 }, 00:17:08.057 { 00:17:08.057 "name": null, 00:17:08.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.057 "is_configured": false, 00:17:08.057 "data_offset": 2048, 00:17:08.057 "data_size": 63488 00:17:08.057 }, 00:17:08.057 { 00:17:08.057 "name": "BaseBdev3", 00:17:08.057 "uuid": "6dcc1d4e-100b-5280-94a9-889dea4b0fb6", 00:17:08.057 "is_configured": true, 00:17:08.057 "data_offset": 2048, 00:17:08.057 "data_size": 63488 00:17:08.057 }, 00:17:08.057 { 00:17:08.057 "name": "BaseBdev4", 00:17:08.057 "uuid": "d27a7bb5-9c41-501d-92ef-ee46b870697f", 00:17:08.057 "is_configured": true, 00:17:08.057 "data_offset": 2048, 00:17:08.057 "data_size": 63488 00:17:08.057 } 00:17:08.057 ] 00:17:08.057 }' 00:17:08.057 15:43:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.057 15:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:08.057 15:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.057 15:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:08.057 15:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:08.057 15:43:51 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.057 15:43:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.057 [2024-12-06 15:43:51.062964] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:08.057 [2024-12-06 15:43:51.120436] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:08.057 [2024-12-06 15:43:51.120530] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.057 [2024-12-06 15:43:51.120550] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:08.057 [2024-12-06 15:43:51.120562] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:08.057 15:43:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.057 15:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:08.057 15:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.057 15:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.057 15:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:08.057 15:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:08.057 15:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:08.057 15:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.057 15:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.057 15:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.057 15:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.057 
15:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.057 15:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.057 15:43:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.057 15:43:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.057 15:43:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.057 15:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.057 "name": "raid_bdev1", 00:17:08.057 "uuid": "ded471da-faa2-474b-a08f-b4e45bfd4f1b", 00:17:08.057 "strip_size_kb": 0, 00:17:08.057 "state": "online", 00:17:08.057 "raid_level": "raid1", 00:17:08.057 "superblock": true, 00:17:08.057 "num_base_bdevs": 4, 00:17:08.057 "num_base_bdevs_discovered": 2, 00:17:08.057 "num_base_bdevs_operational": 2, 00:17:08.057 "base_bdevs_list": [ 00:17:08.057 { 00:17:08.057 "name": null, 00:17:08.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.057 "is_configured": false, 00:17:08.057 "data_offset": 0, 00:17:08.057 "data_size": 63488 00:17:08.057 }, 00:17:08.057 { 00:17:08.058 "name": null, 00:17:08.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.058 "is_configured": false, 00:17:08.058 "data_offset": 2048, 00:17:08.058 "data_size": 63488 00:17:08.058 }, 00:17:08.058 { 00:17:08.058 "name": "BaseBdev3", 00:17:08.058 "uuid": "6dcc1d4e-100b-5280-94a9-889dea4b0fb6", 00:17:08.058 "is_configured": true, 00:17:08.058 "data_offset": 2048, 00:17:08.058 "data_size": 63488 00:17:08.058 }, 00:17:08.058 { 00:17:08.058 "name": "BaseBdev4", 00:17:08.058 "uuid": "d27a7bb5-9c41-501d-92ef-ee46b870697f", 00:17:08.058 "is_configured": true, 00:17:08.058 "data_offset": 2048, 00:17:08.058 "data_size": 63488 00:17:08.058 } 00:17:08.058 ] 00:17:08.058 }' 00:17:08.058 15:43:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.058 15:43:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.317 15:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:08.317 15:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.317 15:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:08.317 15:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:08.317 15:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.317 15:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.317 15:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.317 15:43:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.317 15:43:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.317 15:43:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.576 15:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.576 "name": "raid_bdev1", 00:17:08.576 "uuid": "ded471da-faa2-474b-a08f-b4e45bfd4f1b", 00:17:08.576 "strip_size_kb": 0, 00:17:08.576 "state": "online", 00:17:08.576 "raid_level": "raid1", 00:17:08.576 "superblock": true, 00:17:08.576 "num_base_bdevs": 4, 00:17:08.576 "num_base_bdevs_discovered": 2, 00:17:08.576 "num_base_bdevs_operational": 2, 00:17:08.576 "base_bdevs_list": [ 00:17:08.576 { 00:17:08.576 "name": null, 00:17:08.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.576 "is_configured": false, 00:17:08.576 "data_offset": 0, 00:17:08.576 "data_size": 63488 00:17:08.576 }, 00:17:08.576 
{ 00:17:08.576 "name": null, 00:17:08.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.576 "is_configured": false, 00:17:08.576 "data_offset": 2048, 00:17:08.576 "data_size": 63488 00:17:08.576 }, 00:17:08.576 { 00:17:08.576 "name": "BaseBdev3", 00:17:08.576 "uuid": "6dcc1d4e-100b-5280-94a9-889dea4b0fb6", 00:17:08.576 "is_configured": true, 00:17:08.576 "data_offset": 2048, 00:17:08.576 "data_size": 63488 00:17:08.576 }, 00:17:08.576 { 00:17:08.576 "name": "BaseBdev4", 00:17:08.576 "uuid": "d27a7bb5-9c41-501d-92ef-ee46b870697f", 00:17:08.576 "is_configured": true, 00:17:08.576 "data_offset": 2048, 00:17:08.576 "data_size": 63488 00:17:08.576 } 00:17:08.576 ] 00:17:08.576 }' 00:17:08.576 15:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.576 15:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:08.576 15:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.576 15:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:08.576 15:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:08.576 15:43:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.576 15:43:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.576 15:43:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.576 15:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:08.576 15:43:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.576 15:43:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.576 [2024-12-06 15:43:51.719270] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:08.576 [2024-12-06 15:43:51.719343] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.576 [2024-12-06 15:43:51.719369] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:17:08.576 [2024-12-06 15:43:51.719385] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.576 [2024-12-06 15:43:51.719942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.576 [2024-12-06 15:43:51.719977] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:08.576 [2024-12-06 15:43:51.720072] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:08.576 [2024-12-06 15:43:51.720091] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:17:08.576 [2024-12-06 15:43:51.720102] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:08.576 [2024-12-06 15:43:51.720137] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:08.576 BaseBdev1 00:17:08.576 15:43:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.576 15:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:09.513 15:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:09.513 15:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:09.513 15:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.513 15:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:09.513 15:43:52 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:09.513 15:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:09.513 15:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.513 15:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.513 15:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.513 15:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.513 15:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.513 15:43:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.513 15:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.513 15:43:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.513 15:43:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.513 15:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.513 "name": "raid_bdev1", 00:17:09.513 "uuid": "ded471da-faa2-474b-a08f-b4e45bfd4f1b", 00:17:09.513 "strip_size_kb": 0, 00:17:09.513 "state": "online", 00:17:09.513 "raid_level": "raid1", 00:17:09.513 "superblock": true, 00:17:09.513 "num_base_bdevs": 4, 00:17:09.513 "num_base_bdevs_discovered": 2, 00:17:09.513 "num_base_bdevs_operational": 2, 00:17:09.513 "base_bdevs_list": [ 00:17:09.513 { 00:17:09.513 "name": null, 00:17:09.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.513 "is_configured": false, 00:17:09.513 "data_offset": 0, 00:17:09.513 "data_size": 63488 00:17:09.513 }, 00:17:09.513 { 00:17:09.513 "name": null, 00:17:09.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.513 
"is_configured": false, 00:17:09.513 "data_offset": 2048, 00:17:09.513 "data_size": 63488 00:17:09.513 }, 00:17:09.513 { 00:17:09.513 "name": "BaseBdev3", 00:17:09.513 "uuid": "6dcc1d4e-100b-5280-94a9-889dea4b0fb6", 00:17:09.513 "is_configured": true, 00:17:09.513 "data_offset": 2048, 00:17:09.513 "data_size": 63488 00:17:09.513 }, 00:17:09.513 { 00:17:09.513 "name": "BaseBdev4", 00:17:09.513 "uuid": "d27a7bb5-9c41-501d-92ef-ee46b870697f", 00:17:09.513 "is_configured": true, 00:17:09.513 "data_offset": 2048, 00:17:09.513 "data_size": 63488 00:17:09.513 } 00:17:09.513 ] 00:17:09.513 }' 00:17:09.513 15:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.513 15:43:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.109 15:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:10.109 15:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.109 15:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:10.109 15:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:10.109 15:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.109 15:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.109 15:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.109 15:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.109 15:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.109 15:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.109 15:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:17:10.109 "name": "raid_bdev1", 00:17:10.109 "uuid": "ded471da-faa2-474b-a08f-b4e45bfd4f1b", 00:17:10.109 "strip_size_kb": 0, 00:17:10.109 "state": "online", 00:17:10.109 "raid_level": "raid1", 00:17:10.109 "superblock": true, 00:17:10.109 "num_base_bdevs": 4, 00:17:10.109 "num_base_bdevs_discovered": 2, 00:17:10.109 "num_base_bdevs_operational": 2, 00:17:10.109 "base_bdevs_list": [ 00:17:10.109 { 00:17:10.109 "name": null, 00:17:10.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.109 "is_configured": false, 00:17:10.109 "data_offset": 0, 00:17:10.109 "data_size": 63488 00:17:10.109 }, 00:17:10.109 { 00:17:10.109 "name": null, 00:17:10.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.109 "is_configured": false, 00:17:10.109 "data_offset": 2048, 00:17:10.109 "data_size": 63488 00:17:10.109 }, 00:17:10.109 { 00:17:10.109 "name": "BaseBdev3", 00:17:10.109 "uuid": "6dcc1d4e-100b-5280-94a9-889dea4b0fb6", 00:17:10.109 "is_configured": true, 00:17:10.109 "data_offset": 2048, 00:17:10.109 "data_size": 63488 00:17:10.109 }, 00:17:10.109 { 00:17:10.109 "name": "BaseBdev4", 00:17:10.109 "uuid": "d27a7bb5-9c41-501d-92ef-ee46b870697f", 00:17:10.109 "is_configured": true, 00:17:10.109 "data_offset": 2048, 00:17:10.109 "data_size": 63488 00:17:10.109 } 00:17:10.109 ] 00:17:10.109 }' 00:17:10.109 15:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.109 15:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:10.109 15:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.109 15:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:10.109 15:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:10.109 15:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:17:10.109 15:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:10.109 15:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:10.109 15:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:10.109 15:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:10.109 15:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:10.109 15:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:10.109 15:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.109 15:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.109 [2024-12-06 15:43:53.261337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:10.109 [2024-12-06 15:43:53.261609] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:17:10.109 [2024-12-06 15:43:53.261626] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:10.109 request: 00:17:10.109 { 00:17:10.109 "base_bdev": "BaseBdev1", 00:17:10.109 "raid_bdev": "raid_bdev1", 00:17:10.109 "method": "bdev_raid_add_base_bdev", 00:17:10.109 "req_id": 1 00:17:10.109 } 00:17:10.109 Got JSON-RPC error response 00:17:10.109 response: 00:17:10.109 { 00:17:10.109 "code": -22, 00:17:10.109 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:10.109 } 00:17:10.109 15:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:10.109 15:43:53 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:17:10.109 15:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:10.109 15:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:10.109 15:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:10.109 15:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:11.047 15:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:11.047 15:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:11.047 15:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:11.047 15:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:11.047 15:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:11.047 15:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:11.047 15:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.047 15:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.047 15:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.047 15:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.047 15:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.047 15:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.047 15:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.047 15:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:11.047 15:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.047 15:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.047 "name": "raid_bdev1", 00:17:11.047 "uuid": "ded471da-faa2-474b-a08f-b4e45bfd4f1b", 00:17:11.047 "strip_size_kb": 0, 00:17:11.047 "state": "online", 00:17:11.047 "raid_level": "raid1", 00:17:11.047 "superblock": true, 00:17:11.047 "num_base_bdevs": 4, 00:17:11.047 "num_base_bdevs_discovered": 2, 00:17:11.047 "num_base_bdevs_operational": 2, 00:17:11.047 "base_bdevs_list": [ 00:17:11.047 { 00:17:11.047 "name": null, 00:17:11.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.047 "is_configured": false, 00:17:11.047 "data_offset": 0, 00:17:11.047 "data_size": 63488 00:17:11.047 }, 00:17:11.047 { 00:17:11.047 "name": null, 00:17:11.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.047 "is_configured": false, 00:17:11.047 "data_offset": 2048, 00:17:11.047 "data_size": 63488 00:17:11.047 }, 00:17:11.047 { 00:17:11.047 "name": "BaseBdev3", 00:17:11.047 "uuid": "6dcc1d4e-100b-5280-94a9-889dea4b0fb6", 00:17:11.047 "is_configured": true, 00:17:11.047 "data_offset": 2048, 00:17:11.047 "data_size": 63488 00:17:11.047 }, 00:17:11.047 { 00:17:11.047 "name": "BaseBdev4", 00:17:11.047 "uuid": "d27a7bb5-9c41-501d-92ef-ee46b870697f", 00:17:11.047 "is_configured": true, 00:17:11.047 "data_offset": 2048, 00:17:11.047 "data_size": 63488 00:17:11.047 } 00:17:11.047 ] 00:17:11.047 }' 00:17:11.047 15:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.047 15:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.614 15:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:11.614 15:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.614 15:43:54 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:11.614 15:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:11.614 15:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.614 15:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.614 15:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.614 15:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.614 15:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.614 15:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.614 15:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.614 "name": "raid_bdev1", 00:17:11.614 "uuid": "ded471da-faa2-474b-a08f-b4e45bfd4f1b", 00:17:11.614 "strip_size_kb": 0, 00:17:11.614 "state": "online", 00:17:11.614 "raid_level": "raid1", 00:17:11.614 "superblock": true, 00:17:11.614 "num_base_bdevs": 4, 00:17:11.614 "num_base_bdevs_discovered": 2, 00:17:11.614 "num_base_bdevs_operational": 2, 00:17:11.614 "base_bdevs_list": [ 00:17:11.614 { 00:17:11.614 "name": null, 00:17:11.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.614 "is_configured": false, 00:17:11.614 "data_offset": 0, 00:17:11.614 "data_size": 63488 00:17:11.614 }, 00:17:11.614 { 00:17:11.614 "name": null, 00:17:11.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.614 "is_configured": false, 00:17:11.614 "data_offset": 2048, 00:17:11.614 "data_size": 63488 00:17:11.614 }, 00:17:11.614 { 00:17:11.614 "name": "BaseBdev3", 00:17:11.614 "uuid": "6dcc1d4e-100b-5280-94a9-889dea4b0fb6", 00:17:11.614 "is_configured": true, 00:17:11.614 "data_offset": 2048, 00:17:11.614 "data_size": 63488 00:17:11.614 }, 
00:17:11.614 { 00:17:11.614 "name": "BaseBdev4", 00:17:11.614 "uuid": "d27a7bb5-9c41-501d-92ef-ee46b870697f", 00:17:11.614 "is_configured": true, 00:17:11.614 "data_offset": 2048, 00:17:11.614 "data_size": 63488 00:17:11.614 } 00:17:11.614 ] 00:17:11.614 }' 00:17:11.614 15:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.614 15:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:11.614 15:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.614 15:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:11.614 15:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78018 00:17:11.614 15:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78018 ']' 00:17:11.614 15:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78018 00:17:11.614 15:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:11.614 15:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:11.614 15:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78018 00:17:11.614 15:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:11.614 15:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:11.614 killing process with pid 78018 00:17:11.614 15:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78018' 00:17:11.615 15:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78018 00:17:11.615 Received shutdown signal, test time was about 60.000000 seconds 00:17:11.615 00:17:11.615 Latency(us) 00:17:11.615 
[2024-12-06T15:43:54.910Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:11.615 [2024-12-06T15:43:54.910Z] =================================================================================================================== 00:17:11.615 [2024-12-06T15:43:54.910Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:11.615 [2024-12-06 15:43:54.862803] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:11.615 15:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78018 00:17:11.615 [2024-12-06 15:43:54.862951] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:11.615 [2024-12-06 15:43:54.863034] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:11.615 [2024-12-06 15:43:54.863046] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:12.181 [2024-12-06 15:43:55.385720] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:13.555 00:17:13.555 real 0m24.860s 00:17:13.555 user 0m29.638s 00:17:13.555 sys 0m4.293s 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.555 ************************************ 00:17:13.555 END TEST raid_rebuild_test_sb 00:17:13.555 ************************************ 00:17:13.555 15:43:56 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:17:13.555 15:43:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:13.555 15:43:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:13.555 15:43:56 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:17:13.555 ************************************ 00:17:13.555 START TEST raid_rebuild_test_io 00:17:13.555 ************************************ 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78777 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78777 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78777 ']' 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:17:13.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:13.555 15:43:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:13.555 [2024-12-06 15:43:56.806626] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:17:13.555 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:13.555 Zero copy mechanism will not be used. 00:17:13.555 [2024-12-06 15:43:56.806772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78777 ] 00:17:13.813 [2024-12-06 15:43:56.984158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.071 [2024-12-06 15:43:57.114284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.071 [2024-12-06 15:43:57.354089] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:14.071 [2024-12-06 15:43:57.354172] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:14.635 15:43:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:14.635 15:43:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:17:14.635 15:43:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:14.635 15:43:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:17:14.635 15:43:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.635 15:43:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:14.635 BaseBdev1_malloc 00:17:14.635 15:43:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.635 15:43:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:14.635 15:43:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.635 15:43:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:14.635 [2024-12-06 15:43:57.704576] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:14.635 [2024-12-06 15:43:57.704652] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.635 [2024-12-06 15:43:57.704680] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:14.635 [2024-12-06 15:43:57.704696] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.635 [2024-12-06 15:43:57.707494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.635 [2024-12-06 15:43:57.707559] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:14.635 BaseBdev1 00:17:14.635 15:43:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.635 15:43:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:14.635 15:43:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:14.635 15:43:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.635 15:43:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:17:14.635 BaseBdev2_malloc 00:17:14.635 15:43:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.635 15:43:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:14.635 15:43:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.635 15:43:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:14.635 [2024-12-06 15:43:57.768725] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:14.635 [2024-12-06 15:43:57.768794] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.636 [2024-12-06 15:43:57.768824] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:14.636 [2024-12-06 15:43:57.768840] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.636 [2024-12-06 15:43:57.771544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.636 [2024-12-06 15:43:57.771588] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:14.636 BaseBdev2 00:17:14.636 15:43:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.636 15:43:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:14.636 15:43:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:14.636 15:43:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.636 15:43:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:14.636 BaseBdev3_malloc 00:17:14.636 15:43:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.636 15:43:57 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:14.636 15:43:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.636 15:43:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:14.636 [2024-12-06 15:43:57.846939] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:14.636 [2024-12-06 15:43:57.847002] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.636 [2024-12-06 15:43:57.847028] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:14.636 [2024-12-06 15:43:57.847043] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.636 [2024-12-06 15:43:57.849710] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.636 [2024-12-06 15:43:57.849754] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:14.636 BaseBdev3 00:17:14.636 15:43:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.636 15:43:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:14.636 15:43:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:14.636 15:43:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.636 15:43:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:14.636 BaseBdev4_malloc 00:17:14.636 15:43:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.636 15:43:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:14.636 15:43:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:14.636 15:43:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:14.636 [2024-12-06 15:43:57.910649] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:14.636 [2024-12-06 15:43:57.910716] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.636 [2024-12-06 15:43:57.910739] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:14.636 [2024-12-06 15:43:57.910755] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.636 [2024-12-06 15:43:57.913346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.636 [2024-12-06 15:43:57.913392] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:14.636 BaseBdev4 00:17:14.636 15:43:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.636 15:43:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:14.636 15:43:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.636 15:43:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:14.895 spare_malloc 00:17:14.895 15:43:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.895 15:43:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:14.895 15:43:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.895 15:43:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:14.895 spare_delay 00:17:14.895 15:43:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.895 15:43:57 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:14.895 15:43:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.895 15:43:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:14.895 [2024-12-06 15:43:57.985979] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:14.895 [2024-12-06 15:43:57.986037] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.895 [2024-12-06 15:43:57.986057] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:14.895 [2024-12-06 15:43:57.986071] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.895 [2024-12-06 15:43:57.988753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.895 [2024-12-06 15:43:57.988795] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:14.895 spare 00:17:14.895 15:43:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.895 15:43:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:14.895 15:43:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.895 15:43:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:14.895 [2024-12-06 15:43:57.998022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:14.895 [2024-12-06 15:43:58.000378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:14.895 [2024-12-06 15:43:58.000448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:14.895 [2024-12-06 15:43:58.000511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:17:14.895 [2024-12-06 15:43:58.000591] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:14.895 [2024-12-06 15:43:58.000606] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:14.895 [2024-12-06 15:43:58.000902] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:14.895 [2024-12-06 15:43:58.001084] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:14.895 [2024-12-06 15:43:58.001099] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:14.895 [2024-12-06 15:43:58.001256] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:14.895 15:43:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.895 15:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:14.895 15:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.895 15:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.895 15:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:14.895 15:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:14.895 15:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:14.895 15:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.895 15:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.895 15:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.895 15:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:17:14.895 15:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.895 15:43:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.895 15:43:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:14.895 15:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.895 15:43:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.895 15:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.895 "name": "raid_bdev1", 00:17:14.895 "uuid": "287c6485-5b75-402a-94ca-9d9d435fb01f", 00:17:14.895 "strip_size_kb": 0, 00:17:14.895 "state": "online", 00:17:14.895 "raid_level": "raid1", 00:17:14.895 "superblock": false, 00:17:14.895 "num_base_bdevs": 4, 00:17:14.895 "num_base_bdevs_discovered": 4, 00:17:14.895 "num_base_bdevs_operational": 4, 00:17:14.895 "base_bdevs_list": [ 00:17:14.895 { 00:17:14.895 "name": "BaseBdev1", 00:17:14.895 "uuid": "8a31797b-a7bc-5e25-9c3e-4026bbced272", 00:17:14.895 "is_configured": true, 00:17:14.895 "data_offset": 0, 00:17:14.895 "data_size": 65536 00:17:14.895 }, 00:17:14.895 { 00:17:14.895 "name": "BaseBdev2", 00:17:14.895 "uuid": "2a2da0e4-6891-5f1d-a6f7-86670d49f145", 00:17:14.895 "is_configured": true, 00:17:14.895 "data_offset": 0, 00:17:14.895 "data_size": 65536 00:17:14.895 }, 00:17:14.895 { 00:17:14.895 "name": "BaseBdev3", 00:17:14.895 "uuid": "ec695b09-745d-5fc8-8b7f-5c6c9d666dc0", 00:17:14.895 "is_configured": true, 00:17:14.895 "data_offset": 0, 00:17:14.895 "data_size": 65536 00:17:14.895 }, 00:17:14.895 { 00:17:14.895 "name": "BaseBdev4", 00:17:14.895 "uuid": "75a30f6d-a83a-5ddd-a996-95bf0d8fa3fa", 00:17:14.895 "is_configured": true, 00:17:14.895 "data_offset": 0, 00:17:14.895 "data_size": 65536 00:17:14.895 } 00:17:14.895 ] 00:17:14.895 }' 00:17:14.895 
15:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.895 15:43:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.153 15:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:15.153 15:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:15.153 15:43:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.153 15:43:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.153 [2024-12-06 15:43:58.413846] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:15.411 15:43:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.411 15:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:17:15.411 15:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.411 15:43:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.411 15:43:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.411 15:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:15.411 15:43:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.411 15:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:15.411 15:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:17:15.411 15:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:15.411 15:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:15.411 15:43:58 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.411 15:43:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.411 [2024-12-06 15:43:58.493356] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:15.411 15:43:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.411 15:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:15.411 15:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.411 15:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.411 15:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:15.411 15:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:15.411 15:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:15.411 15:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.411 15:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.411 15:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.411 15:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.411 15:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.411 15:43:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.411 15:43:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.411 15:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.411 15:43:58 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.411 15:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.411 "name": "raid_bdev1", 00:17:15.411 "uuid": "287c6485-5b75-402a-94ca-9d9d435fb01f", 00:17:15.411 "strip_size_kb": 0, 00:17:15.411 "state": "online", 00:17:15.411 "raid_level": "raid1", 00:17:15.411 "superblock": false, 00:17:15.411 "num_base_bdevs": 4, 00:17:15.411 "num_base_bdevs_discovered": 3, 00:17:15.411 "num_base_bdevs_operational": 3, 00:17:15.411 "base_bdevs_list": [ 00:17:15.411 { 00:17:15.411 "name": null, 00:17:15.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.411 "is_configured": false, 00:17:15.411 "data_offset": 0, 00:17:15.411 "data_size": 65536 00:17:15.411 }, 00:17:15.411 { 00:17:15.411 "name": "BaseBdev2", 00:17:15.411 "uuid": "2a2da0e4-6891-5f1d-a6f7-86670d49f145", 00:17:15.411 "is_configured": true, 00:17:15.411 "data_offset": 0, 00:17:15.411 "data_size": 65536 00:17:15.411 }, 00:17:15.411 { 00:17:15.411 "name": "BaseBdev3", 00:17:15.411 "uuid": "ec695b09-745d-5fc8-8b7f-5c6c9d666dc0", 00:17:15.411 "is_configured": true, 00:17:15.411 "data_offset": 0, 00:17:15.411 "data_size": 65536 00:17:15.411 }, 00:17:15.411 { 00:17:15.411 "name": "BaseBdev4", 00:17:15.411 "uuid": "75a30f6d-a83a-5ddd-a996-95bf0d8fa3fa", 00:17:15.411 "is_configured": true, 00:17:15.411 "data_offset": 0, 00:17:15.411 "data_size": 65536 00:17:15.411 } 00:17:15.411 ] 00:17:15.411 }' 00:17:15.411 15:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.411 15:43:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.411 [2024-12-06 15:43:58.574095] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:15.411 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:15.411 Zero copy mechanism will not be used. 00:17:15.411 Running I/O for 60 seconds... 
00:17:15.670 15:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:15.670 15:43:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.670 15:43:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.670 [2024-12-06 15:43:58.925059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:15.927 15:43:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.927 15:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:15.927 [2024-12-06 15:43:58.984818] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:17:15.927 [2024-12-06 15:43:58.987442] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:15.928 [2024-12-06 15:43:59.119252] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:15.928 [2024-12-06 15:43:59.121636] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:16.185 [2024-12-06 15:43:59.351224] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:16.185 [2024-12-06 15:43:59.352150] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:16.442 133.00 IOPS, 399.00 MiB/s [2024-12-06T15:43:59.737Z] [2024-12-06 15:43:59.680297] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:16.700 [2024-12-06 15:43:59.902192] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:16.700 [2024-12-06 15:43:59.902480] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:16.700 15:43:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:16.700 15:43:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.700 15:43:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:16.700 15:43:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:16.700 15:43:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.700 15:43:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.700 15:43:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.700 15:43:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:16.700 15:43:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.958 15:44:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.958 15:44:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.958 "name": "raid_bdev1", 00:17:16.958 "uuid": "287c6485-5b75-402a-94ca-9d9d435fb01f", 00:17:16.958 "strip_size_kb": 0, 00:17:16.958 "state": "online", 00:17:16.958 "raid_level": "raid1", 00:17:16.958 "superblock": false, 00:17:16.958 "num_base_bdevs": 4, 00:17:16.958 "num_base_bdevs_discovered": 4, 00:17:16.958 "num_base_bdevs_operational": 4, 00:17:16.958 "process": { 00:17:16.958 "type": "rebuild", 00:17:16.958 "target": "spare", 00:17:16.958 "progress": { 00:17:16.958 "blocks": 10240, 00:17:16.958 "percent": 15 00:17:16.958 } 00:17:16.958 }, 00:17:16.958 "base_bdevs_list": [ 00:17:16.958 { 00:17:16.958 "name": "spare", 00:17:16.958 "uuid": 
"3935c0e3-552e-5ead-9b2d-b72d88d36351", 00:17:16.958 "is_configured": true, 00:17:16.958 "data_offset": 0, 00:17:16.958 "data_size": 65536 00:17:16.958 }, 00:17:16.958 { 00:17:16.958 "name": "BaseBdev2", 00:17:16.958 "uuid": "2a2da0e4-6891-5f1d-a6f7-86670d49f145", 00:17:16.958 "is_configured": true, 00:17:16.958 "data_offset": 0, 00:17:16.958 "data_size": 65536 00:17:16.958 }, 00:17:16.958 { 00:17:16.958 "name": "BaseBdev3", 00:17:16.958 "uuid": "ec695b09-745d-5fc8-8b7f-5c6c9d666dc0", 00:17:16.958 "is_configured": true, 00:17:16.958 "data_offset": 0, 00:17:16.958 "data_size": 65536 00:17:16.958 }, 00:17:16.958 { 00:17:16.958 "name": "BaseBdev4", 00:17:16.958 "uuid": "75a30f6d-a83a-5ddd-a996-95bf0d8fa3fa", 00:17:16.958 "is_configured": true, 00:17:16.958 "data_offset": 0, 00:17:16.958 "data_size": 65536 00:17:16.958 } 00:17:16.958 ] 00:17:16.958 }' 00:17:16.958 15:44:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.958 15:44:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:16.958 15:44:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.958 15:44:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:16.958 15:44:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:16.959 15:44:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.959 15:44:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:16.959 [2024-12-06 15:44:00.121206] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:16.959 [2024-12-06 15:44:00.251106] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:17.217 [2024-12-06 15:44:00.260284] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:17:17.217 [2024-12-06 15:44:00.260361] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.217 [2024-12-06 15:44:00.260377] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:17.217 [2024-12-06 15:44:00.305646] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:17:17.217 15:44:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.217 15:44:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:17.217 15:44:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.217 15:44:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.217 15:44:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:17.217 15:44:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:17.217 15:44:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:17.217 15:44:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.217 15:44:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.217 15:44:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.217 15:44:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.217 15:44:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.217 15:44:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.217 15:44:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.217 15:44:00 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:17.217 15:44:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.217 15:44:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.217 "name": "raid_bdev1", 00:17:17.217 "uuid": "287c6485-5b75-402a-94ca-9d9d435fb01f", 00:17:17.217 "strip_size_kb": 0, 00:17:17.217 "state": "online", 00:17:17.217 "raid_level": "raid1", 00:17:17.217 "superblock": false, 00:17:17.217 "num_base_bdevs": 4, 00:17:17.217 "num_base_bdevs_discovered": 3, 00:17:17.217 "num_base_bdevs_operational": 3, 00:17:17.217 "base_bdevs_list": [ 00:17:17.217 { 00:17:17.217 "name": null, 00:17:17.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.217 "is_configured": false, 00:17:17.217 "data_offset": 0, 00:17:17.217 "data_size": 65536 00:17:17.217 }, 00:17:17.217 { 00:17:17.217 "name": "BaseBdev2", 00:17:17.217 "uuid": "2a2da0e4-6891-5f1d-a6f7-86670d49f145", 00:17:17.217 "is_configured": true, 00:17:17.217 "data_offset": 0, 00:17:17.217 "data_size": 65536 00:17:17.217 }, 00:17:17.217 { 00:17:17.217 "name": "BaseBdev3", 00:17:17.217 "uuid": "ec695b09-745d-5fc8-8b7f-5c6c9d666dc0", 00:17:17.217 "is_configured": true, 00:17:17.217 "data_offset": 0, 00:17:17.217 "data_size": 65536 00:17:17.217 }, 00:17:17.217 { 00:17:17.217 "name": "BaseBdev4", 00:17:17.217 "uuid": "75a30f6d-a83a-5ddd-a996-95bf0d8fa3fa", 00:17:17.217 "is_configured": true, 00:17:17.217 "data_offset": 0, 00:17:17.217 "data_size": 65536 00:17:17.217 } 00:17:17.217 ] 00:17:17.217 }' 00:17:17.217 15:44:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.217 15:44:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:17.476 144.50 IOPS, 433.50 MiB/s [2024-12-06T15:44:00.771Z] 15:44:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:17.476 15:44:00 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.476 15:44:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:17.476 15:44:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:17.476 15:44:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.476 15:44:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.476 15:44:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.476 15:44:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.476 15:44:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:17.476 15:44:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.476 15:44:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.476 "name": "raid_bdev1", 00:17:17.476 "uuid": "287c6485-5b75-402a-94ca-9d9d435fb01f", 00:17:17.476 "strip_size_kb": 0, 00:17:17.476 "state": "online", 00:17:17.476 "raid_level": "raid1", 00:17:17.476 "superblock": false, 00:17:17.476 "num_base_bdevs": 4, 00:17:17.476 "num_base_bdevs_discovered": 3, 00:17:17.476 "num_base_bdevs_operational": 3, 00:17:17.476 "base_bdevs_list": [ 00:17:17.476 { 00:17:17.476 "name": null, 00:17:17.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.476 "is_configured": false, 00:17:17.476 "data_offset": 0, 00:17:17.476 "data_size": 65536 00:17:17.476 }, 00:17:17.476 { 00:17:17.476 "name": "BaseBdev2", 00:17:17.476 "uuid": "2a2da0e4-6891-5f1d-a6f7-86670d49f145", 00:17:17.476 "is_configured": true, 00:17:17.476 "data_offset": 0, 00:17:17.476 "data_size": 65536 00:17:17.476 }, 00:17:17.476 { 00:17:17.476 "name": "BaseBdev3", 00:17:17.476 "uuid": "ec695b09-745d-5fc8-8b7f-5c6c9d666dc0", 
00:17:17.476 "is_configured": true, 00:17:17.476 "data_offset": 0, 00:17:17.476 "data_size": 65536 00:17:17.476 }, 00:17:17.476 { 00:17:17.476 "name": "BaseBdev4", 00:17:17.476 "uuid": "75a30f6d-a83a-5ddd-a996-95bf0d8fa3fa", 00:17:17.476 "is_configured": true, 00:17:17.476 "data_offset": 0, 00:17:17.476 "data_size": 65536 00:17:17.476 } 00:17:17.476 ] 00:17:17.476 }' 00:17:17.476 15:44:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.735 15:44:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:17.735 15:44:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.735 15:44:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:17.735 15:44:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:17.735 15:44:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.735 15:44:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:17.735 [2024-12-06 15:44:00.858664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:17.735 15:44:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.735 15:44:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:17.735 [2024-12-06 15:44:00.930152] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:17.735 [2024-12-06 15:44:00.932754] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:18.045 [2024-12-06 15:44:01.058161] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:18.045 [2024-12-06 15:44:01.060343] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:18.045 [2024-12-06 15:44:01.272760] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:18.045 [2024-12-06 15:44:01.273061] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:18.301 [2024-12-06 15:44:01.516478] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:18.301 [2024-12-06 15:44:01.518470] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:18.559 138.33 IOPS, 415.00 MiB/s [2024-12-06T15:44:01.854Z] [2024-12-06 15:44:01.723233] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:18.559 [2024-12-06 15:44:01.723705] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:18.817 15:44:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:18.817 15:44:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.817 15:44:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:18.817 15:44:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:18.817 15:44:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.817 15:44:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.817 15:44:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.817 15:44:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.817 
15:44:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:18.817 15:44:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.817 15:44:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.817 "name": "raid_bdev1", 00:17:18.817 "uuid": "287c6485-5b75-402a-94ca-9d9d435fb01f", 00:17:18.817 "strip_size_kb": 0, 00:17:18.817 "state": "online", 00:17:18.817 "raid_level": "raid1", 00:17:18.817 "superblock": false, 00:17:18.817 "num_base_bdevs": 4, 00:17:18.817 "num_base_bdevs_discovered": 4, 00:17:18.817 "num_base_bdevs_operational": 4, 00:17:18.817 "process": { 00:17:18.817 "type": "rebuild", 00:17:18.817 "target": "spare", 00:17:18.817 "progress": { 00:17:18.817 "blocks": 12288, 00:17:18.817 "percent": 18 00:17:18.817 } 00:17:18.817 }, 00:17:18.817 "base_bdevs_list": [ 00:17:18.817 { 00:17:18.817 "name": "spare", 00:17:18.817 "uuid": "3935c0e3-552e-5ead-9b2d-b72d88d36351", 00:17:18.817 "is_configured": true, 00:17:18.817 "data_offset": 0, 00:17:18.817 "data_size": 65536 00:17:18.817 }, 00:17:18.817 { 00:17:18.817 "name": "BaseBdev2", 00:17:18.817 "uuid": "2a2da0e4-6891-5f1d-a6f7-86670d49f145", 00:17:18.817 "is_configured": true, 00:17:18.817 "data_offset": 0, 00:17:18.817 "data_size": 65536 00:17:18.817 }, 00:17:18.817 { 00:17:18.817 "name": "BaseBdev3", 00:17:18.817 "uuid": "ec695b09-745d-5fc8-8b7f-5c6c9d666dc0", 00:17:18.817 "is_configured": true, 00:17:18.817 "data_offset": 0, 00:17:18.817 "data_size": 65536 00:17:18.817 }, 00:17:18.817 { 00:17:18.817 "name": "BaseBdev4", 00:17:18.817 "uuid": "75a30f6d-a83a-5ddd-a996-95bf0d8fa3fa", 00:17:18.817 "is_configured": true, 00:17:18.817 "data_offset": 0, 00:17:18.817 "data_size": 65536 00:17:18.817 } 00:17:18.817 ] 00:17:18.817 }' 00:17:18.817 15:44:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.817 [2024-12-06 15:44:01.986401] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:18.817 15:44:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:18.817 [2024-12-06 15:44:01.988432] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:18.817 15:44:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.817 15:44:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:18.817 15:44:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:18.817 15:44:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:18.817 15:44:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:18.817 15:44:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:17:18.817 15:44:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:18.817 15:44:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.817 15:44:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:18.817 [2024-12-06 15:44:02.040073] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:19.076 [2024-12-06 15:44:02.191174] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:17:19.076 [2024-12-06 15:44:02.215194] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:17:19.076 [2024-12-06 15:44:02.215232] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:17:19.076 15:44:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:19.076 15:44:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:17:19.076 15:44:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:17:19.076 15:44:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:19.076 15:44:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.076 15:44:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:19.076 15:44:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:19.076 15:44:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.076 15:44:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.076 15:44:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.076 15:44:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.076 15:44:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:19.076 15:44:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.076 15:44:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.076 "name": "raid_bdev1", 00:17:19.076 "uuid": "287c6485-5b75-402a-94ca-9d9d435fb01f", 00:17:19.076 "strip_size_kb": 0, 00:17:19.076 "state": "online", 00:17:19.076 "raid_level": "raid1", 00:17:19.076 "superblock": false, 00:17:19.076 "num_base_bdevs": 4, 00:17:19.076 "num_base_bdevs_discovered": 3, 00:17:19.076 "num_base_bdevs_operational": 3, 00:17:19.076 "process": { 00:17:19.076 "type": "rebuild", 00:17:19.076 "target": "spare", 00:17:19.076 "progress": { 00:17:19.076 "blocks": 16384, 00:17:19.076 "percent": 25 00:17:19.076 } 
00:17:19.076 }, 00:17:19.076 "base_bdevs_list": [ 00:17:19.076 { 00:17:19.076 "name": "spare", 00:17:19.076 "uuid": "3935c0e3-552e-5ead-9b2d-b72d88d36351", 00:17:19.076 "is_configured": true, 00:17:19.076 "data_offset": 0, 00:17:19.076 "data_size": 65536 00:17:19.076 }, 00:17:19.076 { 00:17:19.076 "name": null, 00:17:19.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.076 "is_configured": false, 00:17:19.076 "data_offset": 0, 00:17:19.076 "data_size": 65536 00:17:19.076 }, 00:17:19.076 { 00:17:19.076 "name": "BaseBdev3", 00:17:19.076 "uuid": "ec695b09-745d-5fc8-8b7f-5c6c9d666dc0", 00:17:19.076 "is_configured": true, 00:17:19.076 "data_offset": 0, 00:17:19.076 "data_size": 65536 00:17:19.076 }, 00:17:19.076 { 00:17:19.076 "name": "BaseBdev4", 00:17:19.076 "uuid": "75a30f6d-a83a-5ddd-a996-95bf0d8fa3fa", 00:17:19.076 "is_configured": true, 00:17:19.076 "data_offset": 0, 00:17:19.076 "data_size": 65536 00:17:19.076 } 00:17:19.076 ] 00:17:19.076 }' 00:17:19.076 15:44:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.076 15:44:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:19.076 15:44:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.076 15:44:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:19.076 15:44:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=490 00:17:19.076 15:44:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:19.076 15:44:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:19.076 15:44:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.076 15:44:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:17:19.076 15:44:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:19.076 15:44:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.076 15:44:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.076 15:44:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.076 15:44:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.076 15:44:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:19.334 15:44:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.334 15:44:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.334 "name": "raid_bdev1", 00:17:19.334 "uuid": "287c6485-5b75-402a-94ca-9d9d435fb01f", 00:17:19.334 "strip_size_kb": 0, 00:17:19.334 "state": "online", 00:17:19.334 "raid_level": "raid1", 00:17:19.334 "superblock": false, 00:17:19.334 "num_base_bdevs": 4, 00:17:19.334 "num_base_bdevs_discovered": 3, 00:17:19.334 "num_base_bdevs_operational": 3, 00:17:19.334 "process": { 00:17:19.334 "type": "rebuild", 00:17:19.334 "target": "spare", 00:17:19.334 "progress": { 00:17:19.334 "blocks": 18432, 00:17:19.334 "percent": 28 00:17:19.334 } 00:17:19.334 }, 00:17:19.334 "base_bdevs_list": [ 00:17:19.334 { 00:17:19.334 "name": "spare", 00:17:19.334 "uuid": "3935c0e3-552e-5ead-9b2d-b72d88d36351", 00:17:19.334 "is_configured": true, 00:17:19.334 "data_offset": 0, 00:17:19.334 "data_size": 65536 00:17:19.334 }, 00:17:19.334 { 00:17:19.334 "name": null, 00:17:19.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.334 "is_configured": false, 00:17:19.334 "data_offset": 0, 00:17:19.334 "data_size": 65536 00:17:19.334 }, 00:17:19.334 { 00:17:19.334 "name": "BaseBdev3", 00:17:19.334 "uuid": 
"ec695b09-745d-5fc8-8b7f-5c6c9d666dc0", 00:17:19.334 "is_configured": true, 00:17:19.334 "data_offset": 0, 00:17:19.334 "data_size": 65536 00:17:19.334 }, 00:17:19.334 { 00:17:19.334 "name": "BaseBdev4", 00:17:19.334 "uuid": "75a30f6d-a83a-5ddd-a996-95bf0d8fa3fa", 00:17:19.334 "is_configured": true, 00:17:19.334 "data_offset": 0, 00:17:19.334 "data_size": 65536 00:17:19.334 } 00:17:19.334 ] 00:17:19.334 }' 00:17:19.334 15:44:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.334 15:44:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:19.334 15:44:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.334 15:44:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:19.334 15:44:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:19.334 [2024-12-06 15:44:02.469361] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:17:19.334 [2024-12-06 15:44:02.470692] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:17:19.591 121.00 IOPS, 363.00 MiB/s [2024-12-06T15:44:02.886Z] [2024-12-06 15:44:02.697745] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:17:20.158 [2024-12-06 15:44:03.154642] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:17:20.417 15:44:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:20.417 15:44:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:20.417 15:44:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:17:20.417 15:44:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:20.417 15:44:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:20.417 15:44:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.417 [2024-12-06 15:44:03.472777] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:17:20.417 [2024-12-06 15:44:03.473427] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:17:20.417 15:44:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.417 15:44:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.417 15:44:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.417 15:44:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:20.417 15:44:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.417 15:44:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.417 "name": "raid_bdev1", 00:17:20.417 "uuid": "287c6485-5b75-402a-94ca-9d9d435fb01f", 00:17:20.417 "strip_size_kb": 0, 00:17:20.417 "state": "online", 00:17:20.417 "raid_level": "raid1", 00:17:20.418 "superblock": false, 00:17:20.418 "num_base_bdevs": 4, 00:17:20.418 "num_base_bdevs_discovered": 3, 00:17:20.418 "num_base_bdevs_operational": 3, 00:17:20.418 "process": { 00:17:20.418 "type": "rebuild", 00:17:20.418 "target": "spare", 00:17:20.418 "progress": { 00:17:20.418 "blocks": 32768, 00:17:20.418 "percent": 50 00:17:20.418 } 00:17:20.418 }, 00:17:20.418 "base_bdevs_list": [ 00:17:20.418 { 00:17:20.418 "name": "spare", 00:17:20.418 "uuid": 
"3935c0e3-552e-5ead-9b2d-b72d88d36351", 00:17:20.418 "is_configured": true, 00:17:20.418 "data_offset": 0, 00:17:20.418 "data_size": 65536 00:17:20.418 }, 00:17:20.418 { 00:17:20.418 "name": null, 00:17:20.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.418 "is_configured": false, 00:17:20.418 "data_offset": 0, 00:17:20.418 "data_size": 65536 00:17:20.418 }, 00:17:20.418 { 00:17:20.418 "name": "BaseBdev3", 00:17:20.418 "uuid": "ec695b09-745d-5fc8-8b7f-5c6c9d666dc0", 00:17:20.418 "is_configured": true, 00:17:20.418 "data_offset": 0, 00:17:20.418 "data_size": 65536 00:17:20.418 }, 00:17:20.418 { 00:17:20.418 "name": "BaseBdev4", 00:17:20.418 "uuid": "75a30f6d-a83a-5ddd-a996-95bf0d8fa3fa", 00:17:20.418 "is_configured": true, 00:17:20.418 "data_offset": 0, 00:17:20.418 "data_size": 65536 00:17:20.418 } 00:17:20.418 ] 00:17:20.418 }' 00:17:20.418 15:44:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.418 15:44:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:20.418 15:44:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.418 107.40 IOPS, 322.20 MiB/s [2024-12-06T15:44:03.713Z] 15:44:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:20.418 15:44:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:20.418 [2024-12-06 15:44:03.681964] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:17:20.418 [2024-12-06 15:44:03.682410] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:17:20.985 [2024-12-06 15:44:04.149424] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:17:21.243 [2024-12-06 15:44:04.385416] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:17:21.502 96.00 IOPS, 288.00 MiB/s [2024-12-06T15:44:04.797Z] 15:44:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:21.502 15:44:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:21.502 15:44:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.502 15:44:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:21.502 15:44:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:21.502 15:44:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.502 15:44:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.502 15:44:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.502 15:44:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.502 15:44:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:21.502 15:44:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.502 15:44:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.502 "name": "raid_bdev1", 00:17:21.502 "uuid": "287c6485-5b75-402a-94ca-9d9d435fb01f", 00:17:21.502 "strip_size_kb": 0, 00:17:21.502 "state": "online", 00:17:21.502 "raid_level": "raid1", 00:17:21.502 "superblock": false, 00:17:21.502 "num_base_bdevs": 4, 00:17:21.502 "num_base_bdevs_discovered": 3, 00:17:21.502 "num_base_bdevs_operational": 3, 00:17:21.502 "process": { 00:17:21.502 "type": "rebuild", 00:17:21.502 "target": "spare", 00:17:21.502 "progress": { 00:17:21.502 "blocks": 49152, 
00:17:21.502 "percent": 75 00:17:21.502 } 00:17:21.502 }, 00:17:21.502 "base_bdevs_list": [ 00:17:21.502 { 00:17:21.502 "name": "spare", 00:17:21.502 "uuid": "3935c0e3-552e-5ead-9b2d-b72d88d36351", 00:17:21.502 "is_configured": true, 00:17:21.502 "data_offset": 0, 00:17:21.502 "data_size": 65536 00:17:21.502 }, 00:17:21.502 { 00:17:21.502 "name": null, 00:17:21.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.502 "is_configured": false, 00:17:21.502 "data_offset": 0, 00:17:21.502 "data_size": 65536 00:17:21.502 }, 00:17:21.502 { 00:17:21.502 "name": "BaseBdev3", 00:17:21.502 "uuid": "ec695b09-745d-5fc8-8b7f-5c6c9d666dc0", 00:17:21.502 "is_configured": true, 00:17:21.502 "data_offset": 0, 00:17:21.502 "data_size": 65536 00:17:21.502 }, 00:17:21.502 { 00:17:21.502 "name": "BaseBdev4", 00:17:21.502 "uuid": "75a30f6d-a83a-5ddd-a996-95bf0d8fa3fa", 00:17:21.503 "is_configured": true, 00:17:21.503 "data_offset": 0, 00:17:21.503 "data_size": 65536 00:17:21.503 } 00:17:21.503 ] 00:17:21.503 }' 00:17:21.503 15:44:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.503 15:44:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:21.503 15:44:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.503 15:44:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:21.503 15:44:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:22.068 [2024-12-06 15:44:05.055605] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:17:22.326 [2024-12-06 15:44:05.502267] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:22.326 87.71 IOPS, 263.14 MiB/s [2024-12-06T15:44:05.621Z] [2024-12-06 15:44:05.602108] bdev_raid.c:2562:raid_bdev_process_finish_done: 
*NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:22.326 [2024-12-06 15:44:05.611430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:22.585 15:44:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:22.585 15:44:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:22.585 15:44:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.585 15:44:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:22.585 15:44:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:22.585 15:44:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.585 15:44:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.585 15:44:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.585 15:44:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.585 15:44:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:22.585 15:44:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.585 15:44:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.585 "name": "raid_bdev1", 00:17:22.585 "uuid": "287c6485-5b75-402a-94ca-9d9d435fb01f", 00:17:22.585 "strip_size_kb": 0, 00:17:22.585 "state": "online", 00:17:22.585 "raid_level": "raid1", 00:17:22.585 "superblock": false, 00:17:22.585 "num_base_bdevs": 4, 00:17:22.585 "num_base_bdevs_discovered": 3, 00:17:22.585 "num_base_bdevs_operational": 3, 00:17:22.585 "base_bdevs_list": [ 00:17:22.585 { 00:17:22.585 "name": "spare", 00:17:22.585 "uuid": "3935c0e3-552e-5ead-9b2d-b72d88d36351", 00:17:22.585 
"is_configured": true, 00:17:22.585 "data_offset": 0, 00:17:22.585 "data_size": 65536 00:17:22.585 }, 00:17:22.585 { 00:17:22.585 "name": null, 00:17:22.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.585 "is_configured": false, 00:17:22.585 "data_offset": 0, 00:17:22.585 "data_size": 65536 00:17:22.585 }, 00:17:22.585 { 00:17:22.585 "name": "BaseBdev3", 00:17:22.585 "uuid": "ec695b09-745d-5fc8-8b7f-5c6c9d666dc0", 00:17:22.585 "is_configured": true, 00:17:22.585 "data_offset": 0, 00:17:22.585 "data_size": 65536 00:17:22.585 }, 00:17:22.585 { 00:17:22.585 "name": "BaseBdev4", 00:17:22.585 "uuid": "75a30f6d-a83a-5ddd-a996-95bf0d8fa3fa", 00:17:22.585 "is_configured": true, 00:17:22.585 "data_offset": 0, 00:17:22.585 "data_size": 65536 00:17:22.585 } 00:17:22.585 ] 00:17:22.585 }' 00:17:22.585 15:44:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.585 15:44:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:22.585 15:44:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.844 15:44:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:22.844 15:44:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:17:22.844 15:44:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:22.844 15:44:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.844 15:44:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:22.844 15:44:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:22.844 15:44:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.844 15:44:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:17:22.844 15:44:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.844 15:44:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.844 15:44:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:22.844 15:44:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.844 15:44:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.844 "name": "raid_bdev1", 00:17:22.844 "uuid": "287c6485-5b75-402a-94ca-9d9d435fb01f", 00:17:22.844 "strip_size_kb": 0, 00:17:22.844 "state": "online", 00:17:22.844 "raid_level": "raid1", 00:17:22.844 "superblock": false, 00:17:22.844 "num_base_bdevs": 4, 00:17:22.844 "num_base_bdevs_discovered": 3, 00:17:22.844 "num_base_bdevs_operational": 3, 00:17:22.844 "base_bdevs_list": [ 00:17:22.844 { 00:17:22.844 "name": "spare", 00:17:22.844 "uuid": "3935c0e3-552e-5ead-9b2d-b72d88d36351", 00:17:22.844 "is_configured": true, 00:17:22.844 "data_offset": 0, 00:17:22.844 "data_size": 65536 00:17:22.844 }, 00:17:22.844 { 00:17:22.844 "name": null, 00:17:22.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.844 "is_configured": false, 00:17:22.844 "data_offset": 0, 00:17:22.844 "data_size": 65536 00:17:22.844 }, 00:17:22.844 { 00:17:22.844 "name": "BaseBdev3", 00:17:22.844 "uuid": "ec695b09-745d-5fc8-8b7f-5c6c9d666dc0", 00:17:22.844 "is_configured": true, 00:17:22.844 "data_offset": 0, 00:17:22.844 "data_size": 65536 00:17:22.844 }, 00:17:22.844 { 00:17:22.844 "name": "BaseBdev4", 00:17:22.844 "uuid": "75a30f6d-a83a-5ddd-a996-95bf0d8fa3fa", 00:17:22.844 "is_configured": true, 00:17:22.844 "data_offset": 0, 00:17:22.844 "data_size": 65536 00:17:22.844 } 00:17:22.844 ] 00:17:22.844 }' 00:17:22.844 15:44:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.844 
15:44:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:22.844 15:44:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.844 15:44:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:22.844 15:44:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:22.844 15:44:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:22.844 15:44:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:22.844 15:44:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:22.844 15:44:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:22.844 15:44:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:22.844 15:44:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.844 15:44:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.844 15:44:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.844 15:44:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.844 15:44:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.844 15:44:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.844 15:44:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.844 15:44:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:22.844 15:44:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.844 15:44:06 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.844 "name": "raid_bdev1", 00:17:22.844 "uuid": "287c6485-5b75-402a-94ca-9d9d435fb01f", 00:17:22.844 "strip_size_kb": 0, 00:17:22.844 "state": "online", 00:17:22.844 "raid_level": "raid1", 00:17:22.844 "superblock": false, 00:17:22.844 "num_base_bdevs": 4, 00:17:22.844 "num_base_bdevs_discovered": 3, 00:17:22.844 "num_base_bdevs_operational": 3, 00:17:22.844 "base_bdevs_list": [ 00:17:22.844 { 00:17:22.844 "name": "spare", 00:17:22.844 "uuid": "3935c0e3-552e-5ead-9b2d-b72d88d36351", 00:17:22.844 "is_configured": true, 00:17:22.844 "data_offset": 0, 00:17:22.844 "data_size": 65536 00:17:22.844 }, 00:17:22.844 { 00:17:22.844 "name": null, 00:17:22.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.844 "is_configured": false, 00:17:22.844 "data_offset": 0, 00:17:22.844 "data_size": 65536 00:17:22.844 }, 00:17:22.844 { 00:17:22.844 "name": "BaseBdev3", 00:17:22.844 "uuid": "ec695b09-745d-5fc8-8b7f-5c6c9d666dc0", 00:17:22.844 "is_configured": true, 00:17:22.844 "data_offset": 0, 00:17:22.844 "data_size": 65536 00:17:22.844 }, 00:17:22.844 { 00:17:22.844 "name": "BaseBdev4", 00:17:22.844 "uuid": "75a30f6d-a83a-5ddd-a996-95bf0d8fa3fa", 00:17:22.844 "is_configured": true, 00:17:22.844 "data_offset": 0, 00:17:22.844 "data_size": 65536 00:17:22.844 } 00:17:22.844 ] 00:17:22.844 }' 00:17:22.844 15:44:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.844 15:44:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:23.102 15:44:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:23.102 15:44:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.102 15:44:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:23.102 [2024-12-06 15:44:06.388519] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete 
raid bdev: raid_bdev1 00:17:23.102 [2024-12-06 15:44:06.388555] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:23.359 00:17:23.359 Latency(us) 00:17:23.359 [2024-12-06T15:44:06.654Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:23.360 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:17:23.360 raid_bdev1 : 7.91 81.56 244.68 0.00 0.00 16035.18 305.97 114543.24 00:17:23.360 [2024-12-06T15:44:06.655Z] =================================================================================================================== 00:17:23.360 [2024-12-06T15:44:06.655Z] Total : 81.56 244.68 0.00 0.00 16035.18 305.97 114543.24 00:17:23.360 [2024-12-06 15:44:06.494117] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:23.360 { 00:17:23.360 "results": [ 00:17:23.360 { 00:17:23.360 "job": "raid_bdev1", 00:17:23.360 "core_mask": "0x1", 00:17:23.360 "workload": "randrw", 00:17:23.360 "percentage": 50, 00:17:23.360 "status": "finished", 00:17:23.360 "queue_depth": 2, 00:17:23.360 "io_size": 3145728, 00:17:23.360 "runtime": 7.908381, 00:17:23.360 "iops": 81.55904476529393, 00:17:23.360 "mibps": 244.67713429588179, 00:17:23.360 "io_failed": 0, 00:17:23.360 "io_timeout": 0, 00:17:23.360 "avg_latency_us": 16035.180095264777, 00:17:23.360 "min_latency_us": 305.96626506024097, 00:17:23.360 "max_latency_us": 114543.24176706828 00:17:23.360 } 00:17:23.360 ], 00:17:23.360 "core_count": 1 00:17:23.360 } 00:17:23.360 [2024-12-06 15:44:06.494330] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:23.360 [2024-12-06 15:44:06.494456] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:23.360 [2024-12-06 15:44:06.494471] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:23.360 15:44:06 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.360 15:44:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:17:23.360 15:44:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.360 15:44:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.360 15:44:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:23.360 15:44:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.360 15:44:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:23.360 15:44:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:23.360 15:44:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:17:23.360 15:44:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:17:23.360 15:44:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:23.360 15:44:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:17:23.360 15:44:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:23.360 15:44:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:23.360 15:44:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:23.360 15:44:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:17:23.360 15:44:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:23.360 15:44:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:23.360 15:44:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk spare /dev/nbd0 00:17:23.619 /dev/nbd0 00:17:23.619 15:44:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:23.619 15:44:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:23.619 15:44:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:23.619 15:44:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:17:23.619 15:44:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:23.619 15:44:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:23.619 15:44:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:23.619 15:44:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:17:23.619 15:44:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:23.619 15:44:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:23.619 15:44:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:23.619 1+0 records in 00:17:23.619 1+0 records out 00:17:23.619 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409684 s, 10.0 MB/s 00:17:23.619 15:44:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.619 15:44:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:17:23.619 15:44:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.619 15:44:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:23.619 15:44:06 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@893 -- # return 0 00:17:23.619 15:44:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:23.619 15:44:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:23.619 15:44:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:23.619 15:44:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:17:23.619 15:44:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:17:23.619 15:44:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:23.619 15:44:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:17:23.619 15:44:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:17:23.619 15:44:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:23.619 15:44:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:17:23.619 15:44:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:23.619 15:44:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:23.619 15:44:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:23.619 15:44:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:17:23.619 15:44:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:23.619 15:44:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:23.619 15:44:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:17:23.879 /dev/nbd1 00:17:23.879 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # 
basename /dev/nbd1 00:17:23.879 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:23.879 15:44:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:23.879 15:44:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:17:23.879 15:44:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:23.879 15:44:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:23.879 15:44:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:23.879 15:44:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:17:23.879 15:44:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:23.879 15:44:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:23.879 15:44:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:23.879 1+0 records in 00:17:23.879 1+0 records out 00:17:23.879 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381681 s, 10.7 MB/s 00:17:23.879 15:44:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.879 15:44:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:17:23.879 15:44:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.879 15:44:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:23.879 15:44:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:17:23.879 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:23.879 
15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:23.879 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:24.138 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:24.138 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:24.138 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:17:24.138 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:24.138 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:17:24.138 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:24.138 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:24.397 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:24.397 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:24.397 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:24.397 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:24.397 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:24.397 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:24.397 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:17:24.397 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:24.397 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:24.397 15:44:07 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:17:24.397 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:17:24.397 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:24.397 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:17:24.397 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:24.397 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:24.397 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:24.397 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:17:24.397 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:24.397 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:24.397 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:17:24.656 /dev/nbd1 00:17:24.656 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:24.656 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:24.656 15:44:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:24.656 15:44:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:17:24.656 15:44:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:24.656 15:44:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:24.656 15:44:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:24.656 15:44:07 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@877 -- # break 00:17:24.656 15:44:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:24.656 15:44:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:24.656 15:44:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:24.656 1+0 records in 00:17:24.656 1+0 records out 00:17:24.656 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350399 s, 11.7 MB/s 00:17:24.656 15:44:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:24.656 15:44:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:17:24.656 15:44:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:24.656 15:44:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:24.656 15:44:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:17:24.656 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:24.656 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:24.656 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:24.656 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:24.656 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:24.656 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:17:24.656 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:24.656 15:44:07 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@51 -- # local i 00:17:24.656 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:24.656 15:44:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:24.916 15:44:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:24.916 15:44:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:24.916 15:44:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:24.916 15:44:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:24.916 15:44:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:24.916 15:44:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:24.916 15:44:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:17:24.916 15:44:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:24.916 15:44:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:24.916 15:44:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:24.916 15:44:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:24.916 15:44:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:24.916 15:44:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:17:24.916 15:44:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:24.916 15:44:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:25.176 15:44:08 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:25.176 15:44:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:25.176 15:44:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:25.176 15:44:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:25.176 15:44:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:25.176 15:44:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:25.176 15:44:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:17:25.176 15:44:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:25.176 15:44:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:25.176 15:44:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78777 00:17:25.176 15:44:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78777 ']' 00:17:25.176 15:44:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 78777 00:17:25.176 15:44:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:17:25.176 15:44:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:25.176 15:44:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78777 00:17:25.176 killing process with pid 78777 00:17:25.176 Received shutdown signal, test time was about 9.867401 seconds 00:17:25.176 00:17:25.176 Latency(us) 00:17:25.176 [2024-12-06T15:44:08.471Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:25.176 [2024-12-06T15:44:08.471Z] =================================================================================================================== 00:17:25.176 [2024-12-06T15:44:08.471Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:17:25.176 15:44:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:25.176 15:44:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:25.176 15:44:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78777' 00:17:25.176 15:44:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78777 00:17:25.176 [2024-12-06 15:44:08.428166] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:25.176 15:44:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 78777 00:17:25.744 [2024-12-06 15:44:08.879372] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:17:27.178 00:17:27.178 real 0m13.479s 00:17:27.178 user 0m16.541s 00:17:27.178 sys 0m2.183s 00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:27.178 ************************************ 00:17:27.178 END TEST raid_rebuild_test_io 00:17:27.178 ************************************ 00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:27.178 15:44:10 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:17:27.178 15:44:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:27.178 15:44:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:27.178 15:44:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:27.178 ************************************ 00:17:27.178 START TEST raid_rebuild_test_sb_io 00:17:27.178 ************************************ 00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:17:27.178 15:44:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:27.178 15:44:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79188 00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79188 00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79188 ']' 00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:27.178 15:44:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:27.178 [2024-12-06 15:44:10.370890] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:17:27.178 [2024-12-06 15:44:10.371230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:17:27.178 Zero copy mechanism will not be used. 00:17:27.178 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79188 ] 00:17:27.437 [2024-12-06 15:44:10.557387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.437 [2024-12-06 15:44:10.688088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.695 [2024-12-06 15:44:10.931450] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:27.695 [2024-12-06 15:44:10.931524] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:27.952 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:27.952 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:17:27.952 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:27.952 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:27.952 
15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.952 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.211 BaseBdev1_malloc 00:17:28.211 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.211 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:28.211 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.211 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.211 [2024-12-06 15:44:11.249107] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:28.211 [2024-12-06 15:44:11.249181] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.211 [2024-12-06 15:44:11.249208] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:28.211 [2024-12-06 15:44:11.249224] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.211 [2024-12-06 15:44:11.251920] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.211 [2024-12-06 15:44:11.252085] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:28.211 BaseBdev1 00:17:28.211 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.211 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:28.211 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:28.211 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.211 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:17:28.211 BaseBdev2_malloc 00:17:28.211 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.211 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:28.211 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.211 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.211 [2024-12-06 15:44:11.307812] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:28.211 [2024-12-06 15:44:11.307880] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.211 [2024-12-06 15:44:11.307907] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:28.211 [2024-12-06 15:44:11.307922] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.211 [2024-12-06 15:44:11.310744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.211 [2024-12-06 15:44:11.310894] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:28.211 BaseBdev2 00:17:28.211 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.211 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:28.211 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:28.211 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.211 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.211 BaseBdev3_malloc 00:17:28.211 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:28.211 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:28.211 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.211 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.211 [2024-12-06 15:44:11.382987] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:28.211 [2024-12-06 15:44:11.383045] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.211 [2024-12-06 15:44:11.383071] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:28.211 [2024-12-06 15:44:11.383087] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.211 [2024-12-06 15:44:11.385763] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.211 [2024-12-06 15:44:11.385815] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:28.211 BaseBdev3 00:17:28.211 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.211 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:28.211 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:28.211 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.211 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.211 BaseBdev4_malloc 00:17:28.211 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.211 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:28.211 
15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.211 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.211 [2024-12-06 15:44:11.445105] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:28.211 [2024-12-06 15:44:11.445278] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.211 [2024-12-06 15:44:11.445311] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:28.211 [2024-12-06 15:44:11.445327] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.211 [2024-12-06 15:44:11.447958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.211 [2024-12-06 15:44:11.448002] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:28.211 BaseBdev4 00:17:28.211 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.211 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:28.211 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.211 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.211 spare_malloc 00:17:28.211 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.211 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:28.211 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.211 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.471 spare_delay 00:17:28.471 15:44:11 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.471 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:28.471 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.471 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.471 [2024-12-06 15:44:11.519420] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:28.471 [2024-12-06 15:44:11.519613] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.471 [2024-12-06 15:44:11.519642] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:28.471 [2024-12-06 15:44:11.519658] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.471 [2024-12-06 15:44:11.522344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.471 [2024-12-06 15:44:11.522386] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:28.471 spare 00:17:28.471 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.471 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:28.471 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.471 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.471 [2024-12-06 15:44:11.531467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:28.471 [2024-12-06 15:44:11.534031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:28.471 [2024-12-06 15:44:11.534124] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:28.471 [2024-12-06 15:44:11.534182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:28.471 [2024-12-06 15:44:11.534385] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:28.471 [2024-12-06 15:44:11.534403] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:28.471 [2024-12-06 15:44:11.534705] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:28.471 [2024-12-06 15:44:11.534915] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:28.471 [2024-12-06 15:44:11.534973] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:28.471 [2024-12-06 15:44:11.535141] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:28.471 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.471 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:28.471 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:28.471 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:28.471 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:28.471 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:28.471 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:28.471 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.471 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:17:28.471 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.471 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.471 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.471 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.471 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.471 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.471 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.471 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.471 "name": "raid_bdev1", 00:17:28.471 "uuid": "e9d12d7b-533a-4d11-bdf1-6e4c687b79c8", 00:17:28.471 "strip_size_kb": 0, 00:17:28.471 "state": "online", 00:17:28.471 "raid_level": "raid1", 00:17:28.471 "superblock": true, 00:17:28.471 "num_base_bdevs": 4, 00:17:28.471 "num_base_bdevs_discovered": 4, 00:17:28.471 "num_base_bdevs_operational": 4, 00:17:28.471 "base_bdevs_list": [ 00:17:28.471 { 00:17:28.471 "name": "BaseBdev1", 00:17:28.471 "uuid": "a1ffd94b-714e-5893-a670-e13fbeb58fb9", 00:17:28.471 "is_configured": true, 00:17:28.471 "data_offset": 2048, 00:17:28.471 "data_size": 63488 00:17:28.471 }, 00:17:28.471 { 00:17:28.471 "name": "BaseBdev2", 00:17:28.471 "uuid": "dfa96608-ad5c-5305-b42b-7720e185e0c6", 00:17:28.471 "is_configured": true, 00:17:28.471 "data_offset": 2048, 00:17:28.471 "data_size": 63488 00:17:28.471 }, 00:17:28.471 { 00:17:28.471 "name": "BaseBdev3", 00:17:28.471 "uuid": "18ba49bb-decd-57d0-b2df-999026f6bdc4", 00:17:28.471 "is_configured": true, 00:17:28.471 "data_offset": 2048, 00:17:28.471 "data_size": 63488 00:17:28.471 }, 00:17:28.471 { 00:17:28.471 
"name": "BaseBdev4", 00:17:28.471 "uuid": "db012dc0-07dc-554d-baa8-2f0548984f52", 00:17:28.471 "is_configured": true, 00:17:28.471 "data_offset": 2048, 00:17:28.471 "data_size": 63488 00:17:28.471 } 00:17:28.471 ] 00:17:28.471 }' 00:17:28.471 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.471 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.730 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:28.731 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.731 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.731 15:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:28.731 [2024-12-06 15:44:11.975134] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:28.731 15:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.731 15:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:17:28.731 15:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.731 15:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:28.731 15:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.731 15:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.990 15:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.990 15:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:28.991 15:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:17:28.991 
15:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:28.991 15:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:28.991 15:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.991 15:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.991 [2024-12-06 15:44:12.062652] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:28.991 15:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.991 15:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:28.991 15:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:28.991 15:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:28.991 15:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:28.991 15:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:28.991 15:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:28.991 15:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.991 15:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.991 15:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.991 15:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.991 15:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.991 15:44:12 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.991 15:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.991 15:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.991 15:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.991 15:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.991 "name": "raid_bdev1", 00:17:28.991 "uuid": "e9d12d7b-533a-4d11-bdf1-6e4c687b79c8", 00:17:28.991 "strip_size_kb": 0, 00:17:28.991 "state": "online", 00:17:28.991 "raid_level": "raid1", 00:17:28.991 "superblock": true, 00:17:28.991 "num_base_bdevs": 4, 00:17:28.991 "num_base_bdevs_discovered": 3, 00:17:28.991 "num_base_bdevs_operational": 3, 00:17:28.991 "base_bdevs_list": [ 00:17:28.991 { 00:17:28.991 "name": null, 00:17:28.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.991 "is_configured": false, 00:17:28.991 "data_offset": 0, 00:17:28.991 "data_size": 63488 00:17:28.991 }, 00:17:28.991 { 00:17:28.991 "name": "BaseBdev2", 00:17:28.991 "uuid": "dfa96608-ad5c-5305-b42b-7720e185e0c6", 00:17:28.991 "is_configured": true, 00:17:28.991 "data_offset": 2048, 00:17:28.991 "data_size": 63488 00:17:28.991 }, 00:17:28.991 { 00:17:28.991 "name": "BaseBdev3", 00:17:28.991 "uuid": "18ba49bb-decd-57d0-b2df-999026f6bdc4", 00:17:28.991 "is_configured": true, 00:17:28.991 "data_offset": 2048, 00:17:28.991 "data_size": 63488 00:17:28.991 }, 00:17:28.991 { 00:17:28.991 "name": "BaseBdev4", 00:17:28.991 "uuid": "db012dc0-07dc-554d-baa8-2f0548984f52", 00:17:28.991 "is_configured": true, 00:17:28.991 "data_offset": 2048, 00:17:28.991 "data_size": 63488 00:17:28.991 } 00:17:28.991 ] 00:17:28.991 }' 00:17:28.991 15:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.991 15:44:12 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.991 [2024-12-06 15:44:12.163524] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:28.991 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:28.991 Zero copy mechanism will not be used. 00:17:28.991 Running I/O for 60 seconds... 00:17:29.251 15:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:29.251 15:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.251 15:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:29.251 [2024-12-06 15:44:12.519538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:29.511 15:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.511 15:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:29.511 [2024-12-06 15:44:12.590105] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:17:29.511 [2024-12-06 15:44:12.592711] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:29.511 [2024-12-06 15:44:12.705356] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:29.511 [2024-12-06 15:44:12.707500] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:29.770 [2024-12-06 15:44:12.912847] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:29.770 [2024-12-06 15:44:12.913315] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:30.030 117.00 IOPS, 351.00 MiB/s 
[2024-12-06T15:44:13.325Z] [2024-12-06 15:44:13.246373] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:30.289 [2024-12-06 15:44:13.356260] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:30.289 15:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:30.289 15:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.289 15:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:30.289 15:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:30.289 15:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.289 15:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.289 15:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.289 15:44:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.289 15:44:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:30.548 [2024-12-06 15:44:13.594987] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:30.548 [2024-12-06 15:44:13.597133] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:30.548 15:44:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.548 15:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.548 "name": "raid_bdev1", 00:17:30.548 "uuid": 
"e9d12d7b-533a-4d11-bdf1-6e4c687b79c8", 00:17:30.548 "strip_size_kb": 0, 00:17:30.548 "state": "online", 00:17:30.548 "raid_level": "raid1", 00:17:30.548 "superblock": true, 00:17:30.548 "num_base_bdevs": 4, 00:17:30.548 "num_base_bdevs_discovered": 4, 00:17:30.548 "num_base_bdevs_operational": 4, 00:17:30.548 "process": { 00:17:30.548 "type": "rebuild", 00:17:30.548 "target": "spare", 00:17:30.548 "progress": { 00:17:30.548 "blocks": 12288, 00:17:30.548 "percent": 19 00:17:30.548 } 00:17:30.548 }, 00:17:30.548 "base_bdevs_list": [ 00:17:30.548 { 00:17:30.548 "name": "spare", 00:17:30.548 "uuid": "40600fd0-3f2c-5a18-a96d-e0a89fff583f", 00:17:30.548 "is_configured": true, 00:17:30.548 "data_offset": 2048, 00:17:30.548 "data_size": 63488 00:17:30.548 }, 00:17:30.548 { 00:17:30.548 "name": "BaseBdev2", 00:17:30.548 "uuid": "dfa96608-ad5c-5305-b42b-7720e185e0c6", 00:17:30.548 "is_configured": true, 00:17:30.548 "data_offset": 2048, 00:17:30.548 "data_size": 63488 00:17:30.548 }, 00:17:30.548 { 00:17:30.548 "name": "BaseBdev3", 00:17:30.548 "uuid": "18ba49bb-decd-57d0-b2df-999026f6bdc4", 00:17:30.548 "is_configured": true, 00:17:30.548 "data_offset": 2048, 00:17:30.548 "data_size": 63488 00:17:30.548 }, 00:17:30.548 { 00:17:30.548 "name": "BaseBdev4", 00:17:30.548 "uuid": "db012dc0-07dc-554d-baa8-2f0548984f52", 00:17:30.548 "is_configured": true, 00:17:30.548 "data_offset": 2048, 00:17:30.548 "data_size": 63488 00:17:30.548 } 00:17:30.548 ] 00:17:30.548 }' 00:17:30.548 15:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.548 15:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:30.548 15:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.548 15:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:30.548 15:44:13 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:30.548 15:44:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.548 15:44:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:30.548 [2024-12-06 15:44:13.709345] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:30.548 [2024-12-06 15:44:13.817984] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:30.548 [2024-12-06 15:44:13.828112] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.548 [2024-12-06 15:44:13.828268] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:30.548 [2024-12-06 15:44:13.828319] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:30.807 [2024-12-06 15:44:13.853658] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:17:30.807 15:44:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.807 15:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:30.808 15:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:30.808 15:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:30.808 15:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:30.808 15:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:30.808 15:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:30.808 15:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.808 15:44:13 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.808 15:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.808 15:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.808 15:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.808 15:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.808 15:44:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.808 15:44:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:30.808 15:44:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.808 15:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.808 "name": "raid_bdev1", 00:17:30.808 "uuid": "e9d12d7b-533a-4d11-bdf1-6e4c687b79c8", 00:17:30.808 "strip_size_kb": 0, 00:17:30.808 "state": "online", 00:17:30.808 "raid_level": "raid1", 00:17:30.808 "superblock": true, 00:17:30.808 "num_base_bdevs": 4, 00:17:30.808 "num_base_bdevs_discovered": 3, 00:17:30.808 "num_base_bdevs_operational": 3, 00:17:30.808 "base_bdevs_list": [ 00:17:30.808 { 00:17:30.808 "name": null, 00:17:30.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.808 "is_configured": false, 00:17:30.808 "data_offset": 0, 00:17:30.808 "data_size": 63488 00:17:30.808 }, 00:17:30.808 { 00:17:30.808 "name": "BaseBdev2", 00:17:30.808 "uuid": "dfa96608-ad5c-5305-b42b-7720e185e0c6", 00:17:30.808 "is_configured": true, 00:17:30.808 "data_offset": 2048, 00:17:30.808 "data_size": 63488 00:17:30.808 }, 00:17:30.808 { 00:17:30.808 "name": "BaseBdev3", 00:17:30.808 "uuid": "18ba49bb-decd-57d0-b2df-999026f6bdc4", 00:17:30.808 "is_configured": true, 00:17:30.808 "data_offset": 2048, 00:17:30.808 
"data_size": 63488 00:17:30.808 }, 00:17:30.808 { 00:17:30.808 "name": "BaseBdev4", 00:17:30.808 "uuid": "db012dc0-07dc-554d-baa8-2f0548984f52", 00:17:30.808 "is_configured": true, 00:17:30.808 "data_offset": 2048, 00:17:30.808 "data_size": 63488 00:17:30.808 } 00:17:30.808 ] 00:17:30.808 }' 00:17:30.808 15:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.808 15:44:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:31.068 128.50 IOPS, 385.50 MiB/s [2024-12-06T15:44:14.363Z] 15:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:31.068 15:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:31.068 15:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:31.068 15:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:31.068 15:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:31.068 15:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.068 15:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.068 15:44:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.068 15:44:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:31.068 15:44:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.068 15:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:31.068 "name": "raid_bdev1", 00:17:31.068 "uuid": "e9d12d7b-533a-4d11-bdf1-6e4c687b79c8", 00:17:31.068 "strip_size_kb": 0, 00:17:31.068 "state": "online", 00:17:31.068 "raid_level": "raid1", 00:17:31.068 
"superblock": true, 00:17:31.068 "num_base_bdevs": 4, 00:17:31.068 "num_base_bdevs_discovered": 3, 00:17:31.068 "num_base_bdevs_operational": 3, 00:17:31.068 "base_bdevs_list": [ 00:17:31.068 { 00:17:31.068 "name": null, 00:17:31.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.068 "is_configured": false, 00:17:31.068 "data_offset": 0, 00:17:31.068 "data_size": 63488 00:17:31.068 }, 00:17:31.068 { 00:17:31.068 "name": "BaseBdev2", 00:17:31.068 "uuid": "dfa96608-ad5c-5305-b42b-7720e185e0c6", 00:17:31.068 "is_configured": true, 00:17:31.068 "data_offset": 2048, 00:17:31.068 "data_size": 63488 00:17:31.068 }, 00:17:31.068 { 00:17:31.068 "name": "BaseBdev3", 00:17:31.068 "uuid": "18ba49bb-decd-57d0-b2df-999026f6bdc4", 00:17:31.068 "is_configured": true, 00:17:31.068 "data_offset": 2048, 00:17:31.068 "data_size": 63488 00:17:31.068 }, 00:17:31.068 { 00:17:31.068 "name": "BaseBdev4", 00:17:31.068 "uuid": "db012dc0-07dc-554d-baa8-2f0548984f52", 00:17:31.068 "is_configured": true, 00:17:31.068 "data_offset": 2048, 00:17:31.068 "data_size": 63488 00:17:31.068 } 00:17:31.068 ] 00:17:31.068 }' 00:17:31.068 15:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:31.068 15:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:31.068 15:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:31.068 15:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:31.328 15:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:31.328 15:44:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.328 15:44:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:31.328 [2024-12-06 15:44:14.374090] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:31.328 15:44:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.328 15:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:31.328 [2024-12-06 15:44:14.441446] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:31.328 [2024-12-06 15:44:14.444111] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:31.328 [2024-12-06 15:44:14.556467] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:31.328 [2024-12-06 15:44:14.557126] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:31.588 [2024-12-06 15:44:14.688216] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:31.588 [2024-12-06 15:44:14.689414] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:31.847 [2024-12-06 15:44:15.042916] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:31.847 [2024-12-06 15:44:15.043901] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:32.105 119.67 IOPS, 359.00 MiB/s [2024-12-06T15:44:15.400Z] [2024-12-06 15:44:15.248237] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:32.105 [2024-12-06 15:44:15.248836] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:32.363 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:17:32.363 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:32.363 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:32.363 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:32.363 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:32.363 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.363 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.363 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.363 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:32.363 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.363 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:32.363 "name": "raid_bdev1", 00:17:32.363 "uuid": "e9d12d7b-533a-4d11-bdf1-6e4c687b79c8", 00:17:32.363 "strip_size_kb": 0, 00:17:32.363 "state": "online", 00:17:32.363 "raid_level": "raid1", 00:17:32.363 "superblock": true, 00:17:32.363 "num_base_bdevs": 4, 00:17:32.363 "num_base_bdevs_discovered": 4, 00:17:32.363 "num_base_bdevs_operational": 4, 00:17:32.363 "process": { 00:17:32.363 "type": "rebuild", 00:17:32.363 "target": "spare", 00:17:32.363 "progress": { 00:17:32.363 "blocks": 12288, 00:17:32.363 "percent": 19 00:17:32.363 } 00:17:32.363 }, 00:17:32.363 "base_bdevs_list": [ 00:17:32.363 { 00:17:32.363 "name": "spare", 00:17:32.363 "uuid": "40600fd0-3f2c-5a18-a96d-e0a89fff583f", 00:17:32.363 "is_configured": true, 00:17:32.363 "data_offset": 2048, 00:17:32.363 "data_size": 63488 00:17:32.363 }, 00:17:32.363 { 00:17:32.363 "name": 
"BaseBdev2", 00:17:32.363 "uuid": "dfa96608-ad5c-5305-b42b-7720e185e0c6", 00:17:32.363 "is_configured": true, 00:17:32.363 "data_offset": 2048, 00:17:32.363 "data_size": 63488 00:17:32.363 }, 00:17:32.363 { 00:17:32.363 "name": "BaseBdev3", 00:17:32.363 "uuid": "18ba49bb-decd-57d0-b2df-999026f6bdc4", 00:17:32.363 "is_configured": true, 00:17:32.363 "data_offset": 2048, 00:17:32.363 "data_size": 63488 00:17:32.363 }, 00:17:32.363 { 00:17:32.363 "name": "BaseBdev4", 00:17:32.363 "uuid": "db012dc0-07dc-554d-baa8-2f0548984f52", 00:17:32.363 "is_configured": true, 00:17:32.363 "data_offset": 2048, 00:17:32.363 "data_size": 63488 00:17:32.363 } 00:17:32.363 ] 00:17:32.363 }' 00:17:32.363 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:32.363 [2024-12-06 15:44:15.490378] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:32.363 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:32.363 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:32.363 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:32.363 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:32.363 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:32.363 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:32.363 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:32.363 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:32.363 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:17:32.363 15:44:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:32.363 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.363 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:32.363 [2024-12-06 15:44:15.573697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:32.623 [2024-12-06 15:44:15.800080] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:17:32.623 [2024-12-06 15:44:15.800134] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:17:32.623 [2024-12-06 15:44:15.807350] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:17:32.623 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.623 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:17:32.623 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:17:32.623 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:32.623 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:32.623 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:32.623 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:32.623 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:32.623 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.623 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:32.623 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.623 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:32.623 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.623 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:32.623 "name": "raid_bdev1", 00:17:32.623 "uuid": "e9d12d7b-533a-4d11-bdf1-6e4c687b79c8", 00:17:32.623 "strip_size_kb": 0, 00:17:32.623 "state": "online", 00:17:32.623 "raid_level": "raid1", 00:17:32.623 "superblock": true, 00:17:32.623 "num_base_bdevs": 4, 00:17:32.623 "num_base_bdevs_discovered": 3, 00:17:32.623 "num_base_bdevs_operational": 3, 00:17:32.623 "process": { 00:17:32.623 "type": "rebuild", 00:17:32.623 "target": "spare", 00:17:32.623 "progress": { 00:17:32.623 "blocks": 16384, 00:17:32.623 "percent": 25 00:17:32.623 } 00:17:32.623 }, 00:17:32.623 "base_bdevs_list": [ 00:17:32.623 { 00:17:32.623 "name": "spare", 00:17:32.623 "uuid": "40600fd0-3f2c-5a18-a96d-e0a89fff583f", 00:17:32.623 "is_configured": true, 00:17:32.623 "data_offset": 2048, 00:17:32.623 "data_size": 63488 00:17:32.623 }, 00:17:32.623 { 00:17:32.623 "name": null, 00:17:32.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.623 "is_configured": false, 00:17:32.623 "data_offset": 0, 00:17:32.623 "data_size": 63488 00:17:32.623 }, 00:17:32.623 { 00:17:32.623 "name": "BaseBdev3", 00:17:32.623 "uuid": "18ba49bb-decd-57d0-b2df-999026f6bdc4", 00:17:32.623 "is_configured": true, 00:17:32.623 "data_offset": 2048, 00:17:32.623 "data_size": 63488 00:17:32.623 }, 00:17:32.623 { 00:17:32.623 "name": "BaseBdev4", 00:17:32.623 "uuid": "db012dc0-07dc-554d-baa8-2f0548984f52", 00:17:32.623 "is_configured": true, 00:17:32.623 "data_offset": 2048, 00:17:32.623 "data_size": 63488 00:17:32.623 } 00:17:32.623 ] 00:17:32.623 }' 00:17:32.623 15:44:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:32.623 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:32.623 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:32.882 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:32.882 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=503 00:17:32.882 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:32.882 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:32.882 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:32.882 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:32.882 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:32.882 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:32.882 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.882 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.882 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.882 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:32.882 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.882 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:32.882 "name": "raid_bdev1", 00:17:32.882 "uuid": 
"e9d12d7b-533a-4d11-bdf1-6e4c687b79c8", 00:17:32.882 "strip_size_kb": 0, 00:17:32.882 "state": "online", 00:17:32.882 "raid_level": "raid1", 00:17:32.882 "superblock": true, 00:17:32.882 "num_base_bdevs": 4, 00:17:32.882 "num_base_bdevs_discovered": 3, 00:17:32.882 "num_base_bdevs_operational": 3, 00:17:32.882 "process": { 00:17:32.882 "type": "rebuild", 00:17:32.882 "target": "spare", 00:17:32.882 "progress": { 00:17:32.882 "blocks": 16384, 00:17:32.882 "percent": 25 00:17:32.882 } 00:17:32.882 }, 00:17:32.882 "base_bdevs_list": [ 00:17:32.882 { 00:17:32.882 "name": "spare", 00:17:32.882 "uuid": "40600fd0-3f2c-5a18-a96d-e0a89fff583f", 00:17:32.882 "is_configured": true, 00:17:32.882 "data_offset": 2048, 00:17:32.882 "data_size": 63488 00:17:32.882 }, 00:17:32.882 { 00:17:32.882 "name": null, 00:17:32.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.882 "is_configured": false, 00:17:32.882 "data_offset": 0, 00:17:32.882 "data_size": 63488 00:17:32.882 }, 00:17:32.882 { 00:17:32.882 "name": "BaseBdev3", 00:17:32.882 "uuid": "18ba49bb-decd-57d0-b2df-999026f6bdc4", 00:17:32.882 "is_configured": true, 00:17:32.882 "data_offset": 2048, 00:17:32.882 "data_size": 63488 00:17:32.882 }, 00:17:32.882 { 00:17:32.882 "name": "BaseBdev4", 00:17:32.882 "uuid": "db012dc0-07dc-554d-baa8-2f0548984f52", 00:17:32.882 "is_configured": true, 00:17:32.882 "data_offset": 2048, 00:17:32.882 "data_size": 63488 00:17:32.882 } 00:17:32.882 ] 00:17:32.882 }' 00:17:32.882 15:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:32.882 15:44:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:32.882 15:44:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:32.882 15:44:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:32.882 15:44:16 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:17:32.882 [2024-12-06 15:44:16.159426] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:17:32.882 [2024-12-06 15:44:16.160055] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:17:33.400 109.75 IOPS, 329.25 MiB/s [2024-12-06T15:44:16.695Z] [2024-12-06 15:44:16.590835] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:17:33.967 [2024-12-06 15:44:17.021024] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:17:33.967 15:44:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:33.967 15:44:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:33.967 15:44:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:33.967 15:44:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:33.967 15:44:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:33.968 15:44:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:33.968 15:44:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.968 15:44:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.968 15:44:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.968 15:44:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:33.968 15:44:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:33.968 15:44:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:33.968 "name": "raid_bdev1", 00:17:33.968 "uuid": "e9d12d7b-533a-4d11-bdf1-6e4c687b79c8", 00:17:33.968 "strip_size_kb": 0, 00:17:33.968 "state": "online", 00:17:33.968 "raid_level": "raid1", 00:17:33.968 "superblock": true, 00:17:33.968 "num_base_bdevs": 4, 00:17:33.968 "num_base_bdevs_discovered": 3, 00:17:33.968 "num_base_bdevs_operational": 3, 00:17:33.968 "process": { 00:17:33.968 "type": "rebuild", 00:17:33.968 "target": "spare", 00:17:33.968 "progress": { 00:17:33.968 "blocks": 34816, 00:17:33.968 "percent": 54 00:17:33.968 } 00:17:33.968 }, 00:17:33.968 "base_bdevs_list": [ 00:17:33.968 { 00:17:33.968 "name": "spare", 00:17:33.968 "uuid": "40600fd0-3f2c-5a18-a96d-e0a89fff583f", 00:17:33.968 "is_configured": true, 00:17:33.968 "data_offset": 2048, 00:17:33.968 "data_size": 63488 00:17:33.968 }, 00:17:33.968 { 00:17:33.968 "name": null, 00:17:33.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.968 "is_configured": false, 00:17:33.968 "data_offset": 0, 00:17:33.968 "data_size": 63488 00:17:33.968 }, 00:17:33.968 { 00:17:33.968 "name": "BaseBdev3", 00:17:33.968 "uuid": "18ba49bb-decd-57d0-b2df-999026f6bdc4", 00:17:33.968 "is_configured": true, 00:17:33.968 "data_offset": 2048, 00:17:33.968 "data_size": 63488 00:17:33.968 }, 00:17:33.968 { 00:17:33.968 "name": "BaseBdev4", 00:17:33.968 "uuid": "db012dc0-07dc-554d-baa8-2f0548984f52", 00:17:33.968 "is_configured": true, 00:17:33.968 "data_offset": 2048, 00:17:33.968 "data_size": 63488 00:17:33.968 } 00:17:33.968 ] 00:17:33.968 }' 00:17:33.968 15:44:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:33.968 15:44:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:33.968 15:44:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:33.968 
97.00 IOPS, 291.00 MiB/s [2024-12-06T15:44:17.263Z] 15:44:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:33.968 15:44:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:34.227 [2024-12-06 15:44:17.456849] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:17:34.515 [2024-12-06 15:44:17.781233] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:17:35.097 89.00 IOPS, 267.00 MiB/s [2024-12-06T15:44:18.392Z] [2024-12-06 15:44:18.219899] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:17:35.097 15:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:35.097 15:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:35.097 15:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:35.097 15:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:35.097 15:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:35.097 15:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:35.097 15:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.097 15:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.097 15:44:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.097 15:44:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:35.097 15:44:18 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.097 15:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:35.097 "name": "raid_bdev1", 00:17:35.097 "uuid": "e9d12d7b-533a-4d11-bdf1-6e4c687b79c8", 00:17:35.097 "strip_size_kb": 0, 00:17:35.097 "state": "online", 00:17:35.097 "raid_level": "raid1", 00:17:35.097 "superblock": true, 00:17:35.097 "num_base_bdevs": 4, 00:17:35.097 "num_base_bdevs_discovered": 3, 00:17:35.097 "num_base_bdevs_operational": 3, 00:17:35.097 "process": { 00:17:35.097 "type": "rebuild", 00:17:35.097 "target": "spare", 00:17:35.097 "progress": { 00:17:35.097 "blocks": 51200, 00:17:35.097 "percent": 80 00:17:35.097 } 00:17:35.097 }, 00:17:35.097 "base_bdevs_list": [ 00:17:35.097 { 00:17:35.097 "name": "spare", 00:17:35.097 "uuid": "40600fd0-3f2c-5a18-a96d-e0a89fff583f", 00:17:35.097 "is_configured": true, 00:17:35.097 "data_offset": 2048, 00:17:35.097 "data_size": 63488 00:17:35.097 }, 00:17:35.097 { 00:17:35.097 "name": null, 00:17:35.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.097 "is_configured": false, 00:17:35.097 "data_offset": 0, 00:17:35.097 "data_size": 63488 00:17:35.097 }, 00:17:35.097 { 00:17:35.097 "name": "BaseBdev3", 00:17:35.097 "uuid": "18ba49bb-decd-57d0-b2df-999026f6bdc4", 00:17:35.097 "is_configured": true, 00:17:35.097 "data_offset": 2048, 00:17:35.097 "data_size": 63488 00:17:35.097 }, 00:17:35.097 { 00:17:35.097 "name": "BaseBdev4", 00:17:35.097 "uuid": "db012dc0-07dc-554d-baa8-2f0548984f52", 00:17:35.097 "is_configured": true, 00:17:35.097 "data_offset": 2048, 00:17:35.097 "data_size": 63488 00:17:35.097 } 00:17:35.097 ] 00:17:35.097 }' 00:17:35.097 15:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:35.097 15:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:35.097 15:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:17:35.097 15:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:35.097 15:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:35.664 [2024-12-06 15:44:18.672988] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:17:35.923 [2024-12-06 15:44:18.987195] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:35.923 [2024-12-06 15:44:19.085810] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:35.923 [2024-12-06 15:44:19.095220] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.182 81.14 IOPS, 243.43 MiB/s [2024-12-06T15:44:19.477Z] 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:36.182 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:36.182 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.182 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:36.182 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:36.182 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.182 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.182 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.182 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.182 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.182 
15:44:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.182 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.182 "name": "raid_bdev1", 00:17:36.182 "uuid": "e9d12d7b-533a-4d11-bdf1-6e4c687b79c8", 00:17:36.182 "strip_size_kb": 0, 00:17:36.182 "state": "online", 00:17:36.182 "raid_level": "raid1", 00:17:36.182 "superblock": true, 00:17:36.182 "num_base_bdevs": 4, 00:17:36.182 "num_base_bdevs_discovered": 3, 00:17:36.182 "num_base_bdevs_operational": 3, 00:17:36.182 "base_bdevs_list": [ 00:17:36.182 { 00:17:36.182 "name": "spare", 00:17:36.182 "uuid": "40600fd0-3f2c-5a18-a96d-e0a89fff583f", 00:17:36.182 "is_configured": true, 00:17:36.182 "data_offset": 2048, 00:17:36.182 "data_size": 63488 00:17:36.182 }, 00:17:36.182 { 00:17:36.182 "name": null, 00:17:36.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.182 "is_configured": false, 00:17:36.182 "data_offset": 0, 00:17:36.182 "data_size": 63488 00:17:36.182 }, 00:17:36.182 { 00:17:36.182 "name": "BaseBdev3", 00:17:36.182 "uuid": "18ba49bb-decd-57d0-b2df-999026f6bdc4", 00:17:36.182 "is_configured": true, 00:17:36.182 "data_offset": 2048, 00:17:36.182 "data_size": 63488 00:17:36.182 }, 00:17:36.182 { 00:17:36.182 "name": "BaseBdev4", 00:17:36.182 "uuid": "db012dc0-07dc-554d-baa8-2f0548984f52", 00:17:36.182 "is_configured": true, 00:17:36.182 "data_offset": 2048, 00:17:36.182 "data_size": 63488 00:17:36.182 } 00:17:36.182 ] 00:17:36.182 }' 00:17:36.182 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.182 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:36.182 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.441 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:36.441 
15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:17:36.441 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:36.441 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.441 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:36.441 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:36.441 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.441 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.441 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.441 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.441 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.441 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.441 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.441 "name": "raid_bdev1", 00:17:36.441 "uuid": "e9d12d7b-533a-4d11-bdf1-6e4c687b79c8", 00:17:36.441 "strip_size_kb": 0, 00:17:36.441 "state": "online", 00:17:36.441 "raid_level": "raid1", 00:17:36.441 "superblock": true, 00:17:36.441 "num_base_bdevs": 4, 00:17:36.441 "num_base_bdevs_discovered": 3, 00:17:36.441 "num_base_bdevs_operational": 3, 00:17:36.441 "base_bdevs_list": [ 00:17:36.441 { 00:17:36.441 "name": "spare", 00:17:36.441 "uuid": "40600fd0-3f2c-5a18-a96d-e0a89fff583f", 00:17:36.441 "is_configured": true, 00:17:36.441 "data_offset": 2048, 00:17:36.441 "data_size": 63488 00:17:36.441 }, 00:17:36.441 { 00:17:36.441 "name": null, 00:17:36.441 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:36.441 "is_configured": false, 00:17:36.441 "data_offset": 0, 00:17:36.441 "data_size": 63488 00:17:36.441 }, 00:17:36.441 { 00:17:36.441 "name": "BaseBdev3", 00:17:36.441 "uuid": "18ba49bb-decd-57d0-b2df-999026f6bdc4", 00:17:36.441 "is_configured": true, 00:17:36.441 "data_offset": 2048, 00:17:36.441 "data_size": 63488 00:17:36.441 }, 00:17:36.441 { 00:17:36.441 "name": "BaseBdev4", 00:17:36.441 "uuid": "db012dc0-07dc-554d-baa8-2f0548984f52", 00:17:36.441 "is_configured": true, 00:17:36.441 "data_offset": 2048, 00:17:36.441 "data_size": 63488 00:17:36.441 } 00:17:36.441 ] 00:17:36.441 }' 00:17:36.441 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.441 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:36.441 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.442 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:36.442 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:36.442 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:36.442 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:36.442 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:36.442 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:36.442 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:36.442 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.442 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:17:36.442 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.442 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.442 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.442 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.442 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.442 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.442 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.442 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.442 "name": "raid_bdev1", 00:17:36.442 "uuid": "e9d12d7b-533a-4d11-bdf1-6e4c687b79c8", 00:17:36.442 "strip_size_kb": 0, 00:17:36.442 "state": "online", 00:17:36.442 "raid_level": "raid1", 00:17:36.442 "superblock": true, 00:17:36.442 "num_base_bdevs": 4, 00:17:36.442 "num_base_bdevs_discovered": 3, 00:17:36.442 "num_base_bdevs_operational": 3, 00:17:36.442 "base_bdevs_list": [ 00:17:36.442 { 00:17:36.442 "name": "spare", 00:17:36.442 "uuid": "40600fd0-3f2c-5a18-a96d-e0a89fff583f", 00:17:36.442 "is_configured": true, 00:17:36.442 "data_offset": 2048, 00:17:36.442 "data_size": 63488 00:17:36.442 }, 00:17:36.442 { 00:17:36.442 "name": null, 00:17:36.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.442 "is_configured": false, 00:17:36.442 "data_offset": 0, 00:17:36.442 "data_size": 63488 00:17:36.442 }, 00:17:36.442 { 00:17:36.442 "name": "BaseBdev3", 00:17:36.442 "uuid": "18ba49bb-decd-57d0-b2df-999026f6bdc4", 00:17:36.442 "is_configured": true, 00:17:36.442 "data_offset": 2048, 00:17:36.442 "data_size": 63488 00:17:36.442 }, 00:17:36.442 { 00:17:36.442 "name": 
"BaseBdev4", 00:17:36.442 "uuid": "db012dc0-07dc-554d-baa8-2f0548984f52", 00:17:36.442 "is_configured": true, 00:17:36.442 "data_offset": 2048, 00:17:36.442 "data_size": 63488 00:17:36.442 } 00:17:36.442 ] 00:17:36.442 }' 00:17:36.442 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.442 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.701 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:36.701 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.701 15:44:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.701 [2024-12-06 15:44:19.968009] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:36.701 [2024-12-06 15:44:19.968213] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:36.960 00:17:36.960 Latency(us) 00:17:36.960 [2024-12-06T15:44:20.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.960 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:17:36.960 raid_bdev1 : 7.91 75.56 226.68 0.00 0.00 18707.40 352.03 113701.01 00:17:36.960 [2024-12-06T15:44:20.255Z] =================================================================================================================== 00:17:36.960 [2024-12-06T15:44:20.255Z] Total : 75.56 226.68 0.00 0.00 18707.40 352.03 113701.01 00:17:36.960 [2024-12-06 15:44:20.090481] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:36.960 [2024-12-06 15:44:20.090592] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.960 [2024-12-06 15:44:20.090714] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:36.960 
[2024-12-06 15:44:20.090733] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:36.960 { 00:17:36.960 "results": [ 00:17:36.960 { 00:17:36.960 "job": "raid_bdev1", 00:17:36.960 "core_mask": "0x1", 00:17:36.960 "workload": "randrw", 00:17:36.960 "percentage": 50, 00:17:36.960 "status": "finished", 00:17:36.960 "queue_depth": 2, 00:17:36.960 "io_size": 3145728, 00:17:36.960 "runtime": 7.91439, 00:17:36.960 "iops": 75.5585711596219, 00:17:36.960 "mibps": 226.6757134788657, 00:17:36.960 "io_failed": 0, 00:17:36.960 "io_timeout": 0, 00:17:36.960 "avg_latency_us": 18707.404848826744, 00:17:36.960 "min_latency_us": 352.02570281124497, 00:17:36.960 "max_latency_us": 113701.01204819277 00:17:36.960 } 00:17:36.960 ], 00:17:36.960 "core_count": 1 00:17:36.960 } 00:17:36.960 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.960 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.960 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:17:36.960 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.960 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.960 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.960 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:36.960 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:36.960 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:17:36.960 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:17:36.960 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # 
local rpc_server=/var/tmp/spdk.sock 00:17:36.960 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:17:36.960 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:36.960 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:36.960 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:36.960 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:17:36.960 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:36.960 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:36.960 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:17:37.219 /dev/nbd0 00:17:37.219 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:37.219 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:37.219 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:37.219 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:17:37.219 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:37.219 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:37.219 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:37.219 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:17:37.219 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:37.219 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:37.219 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:37.219 1+0 records in 00:17:37.219 1+0 records out 00:17:37.219 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257669 s, 15.9 MB/s 00:17:37.219 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:37.219 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:17:37.219 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:37.219 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:37.219 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:17:37.219 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:37.219 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:37.219 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:37.219 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:17:37.219 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:17:37.219 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:37.219 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:17:37.219 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:17:37.219 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:37.219 
15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:17:37.219 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:37.219 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:37.219 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:37.219 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:17:37.219 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:37.219 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:37.219 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:17:37.478 /dev/nbd1 00:17:37.478 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:37.478 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:37.478 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:37.478 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:17:37.478 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:37.478 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:37.478 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:37.478 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:17:37.478 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:37.478 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:37.478 
15:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:37.478 1+0 records in 00:17:37.478 1+0 records out 00:17:37.478 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366742 s, 11.2 MB/s 00:17:37.478 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:37.478 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:17:37.478 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:37.478 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:37.478 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:17:37.478 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:37.478 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:37.478 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:37.736 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:37.736 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:37.736 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:17:37.736 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:37.736 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:17:37.736 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:37.736 15:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:37.996 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:37.996 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:37.996 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:37.996 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:37.996 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:37.996 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:37.996 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:17:37.996 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:37.996 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:37.996 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:17:37.996 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:17:37.996 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:37.996 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:17:37.996 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:37.996 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:37.996 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:37.996 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:17:37.996 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- 
# (( i = 0 )) 00:17:37.996 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:37.996 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:17:38.255 /dev/nbd1 00:17:38.255 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:38.255 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:38.255 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:38.255 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:17:38.255 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:38.255 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:38.255 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:38.255 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:17:38.255 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:38.255 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:38.255 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:38.255 1+0 records in 00:17:38.255 1+0 records out 00:17:38.255 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00069737 s, 5.9 MB/s 00:17:38.255 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:38.255 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:17:38.255 15:44:21 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:38.255 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:38.255 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:17:38.255 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:38.255 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:38.255 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:38.255 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:38.255 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:38.255 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:17:38.255 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:38.255 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:17:38.255 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:38.255 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:38.512 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:38.512 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:38.512 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:38.512 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:38.512 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:17:38.512 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:38.512 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:17:38.512 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:38.512 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:38.512 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:38.512 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:38.512 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:38.512 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:17:38.512 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:38.512 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:38.769 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:38.770 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:38.770 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:38.770 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:38.770 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:38.770 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:38.770 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:17:38.770 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 
00:17:38.770 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:38.770 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:38.770 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.770 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:38.770 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.770 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:38.770 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.770 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:38.770 [2024-12-06 15:44:21.944740] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:38.770 [2024-12-06 15:44:21.944814] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.770 [2024-12-06 15:44:21.944850] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:38.770 [2024-12-06 15:44:21.944867] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.770 [2024-12-06 15:44:21.947766] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.770 [2024-12-06 15:44:21.947811] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:38.770 [2024-12-06 15:44:21.947920] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:38.770 [2024-12-06 15:44:21.947985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:38.770 [2024-12-06 15:44:21.948155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 
00:17:38.770 [2024-12-06 15:44:21.948261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:38.770 spare 00:17:38.770 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.770 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:38.770 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.770 15:44:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:38.770 [2024-12-06 15:44:22.048187] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:38.770 [2024-12-06 15:44:22.048222] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:38.770 [2024-12-06 15:44:22.048581] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:17:38.770 [2024-12-06 15:44:22.048767] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:38.770 [2024-12-06 15:44:22.048777] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:38.770 [2024-12-06 15:44:22.048984] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:38.770 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.770 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:38.770 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:38.770 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.770 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:38.770 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:38.770 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:38.770 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.770 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.770 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.770 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.770 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.770 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.770 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.770 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.028 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.028 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.028 "name": "raid_bdev1", 00:17:39.028 "uuid": "e9d12d7b-533a-4d11-bdf1-6e4c687b79c8", 00:17:39.028 "strip_size_kb": 0, 00:17:39.028 "state": "online", 00:17:39.028 "raid_level": "raid1", 00:17:39.028 "superblock": true, 00:17:39.028 "num_base_bdevs": 4, 00:17:39.028 "num_base_bdevs_discovered": 3, 00:17:39.028 "num_base_bdevs_operational": 3, 00:17:39.028 "base_bdevs_list": [ 00:17:39.028 { 00:17:39.028 "name": "spare", 00:17:39.028 "uuid": "40600fd0-3f2c-5a18-a96d-e0a89fff583f", 00:17:39.028 "is_configured": true, 00:17:39.028 "data_offset": 2048, 00:17:39.028 "data_size": 63488 00:17:39.028 }, 00:17:39.028 { 00:17:39.028 "name": null, 00:17:39.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.028 
"is_configured": false, 00:17:39.028 "data_offset": 2048, 00:17:39.028 "data_size": 63488 00:17:39.028 }, 00:17:39.028 { 00:17:39.028 "name": "BaseBdev3", 00:17:39.028 "uuid": "18ba49bb-decd-57d0-b2df-999026f6bdc4", 00:17:39.028 "is_configured": true, 00:17:39.028 "data_offset": 2048, 00:17:39.028 "data_size": 63488 00:17:39.028 }, 00:17:39.028 { 00:17:39.028 "name": "BaseBdev4", 00:17:39.028 "uuid": "db012dc0-07dc-554d-baa8-2f0548984f52", 00:17:39.028 "is_configured": true, 00:17:39.028 "data_offset": 2048, 00:17:39.028 "data_size": 63488 00:17:39.028 } 00:17:39.028 ] 00:17:39.028 }' 00:17:39.028 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.028 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.285 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:39.285 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.285 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:39.285 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:39.285 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.285 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.285 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.285 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.285 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.285 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.285 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.285 "name": "raid_bdev1", 00:17:39.285 "uuid": "e9d12d7b-533a-4d11-bdf1-6e4c687b79c8", 00:17:39.285 "strip_size_kb": 0, 00:17:39.285 "state": "online", 00:17:39.285 "raid_level": "raid1", 00:17:39.285 "superblock": true, 00:17:39.285 "num_base_bdevs": 4, 00:17:39.285 "num_base_bdevs_discovered": 3, 00:17:39.285 "num_base_bdevs_operational": 3, 00:17:39.285 "base_bdevs_list": [ 00:17:39.285 { 00:17:39.285 "name": "spare", 00:17:39.285 "uuid": "40600fd0-3f2c-5a18-a96d-e0a89fff583f", 00:17:39.285 "is_configured": true, 00:17:39.285 "data_offset": 2048, 00:17:39.285 "data_size": 63488 00:17:39.285 }, 00:17:39.285 { 00:17:39.285 "name": null, 00:17:39.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.285 "is_configured": false, 00:17:39.285 "data_offset": 2048, 00:17:39.285 "data_size": 63488 00:17:39.285 }, 00:17:39.285 { 00:17:39.285 "name": "BaseBdev3", 00:17:39.285 "uuid": "18ba49bb-decd-57d0-b2df-999026f6bdc4", 00:17:39.285 "is_configured": true, 00:17:39.285 "data_offset": 2048, 00:17:39.285 "data_size": 63488 00:17:39.285 }, 00:17:39.285 { 00:17:39.285 "name": "BaseBdev4", 00:17:39.285 "uuid": "db012dc0-07dc-554d-baa8-2f0548984f52", 00:17:39.285 "is_configured": true, 00:17:39.285 "data_offset": 2048, 00:17:39.285 "data_size": 63488 00:17:39.285 } 00:17:39.285 ] 00:17:39.285 }' 00:17:39.285 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.285 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:39.285 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.542 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:39.543 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.543 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:39.543 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.543 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.543 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.543 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:39.543 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:39.543 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.543 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.543 [2024-12-06 15:44:22.632231] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:39.543 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.543 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:39.543 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:39.543 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:39.543 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:39.543 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:39.543 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:39.543 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.543 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.543 15:44:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.543 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.543 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.543 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.543 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.543 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.543 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.543 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.543 "name": "raid_bdev1", 00:17:39.543 "uuid": "e9d12d7b-533a-4d11-bdf1-6e4c687b79c8", 00:17:39.543 "strip_size_kb": 0, 00:17:39.543 "state": "online", 00:17:39.543 "raid_level": "raid1", 00:17:39.543 "superblock": true, 00:17:39.543 "num_base_bdevs": 4, 00:17:39.543 "num_base_bdevs_discovered": 2, 00:17:39.543 "num_base_bdevs_operational": 2, 00:17:39.543 "base_bdevs_list": [ 00:17:39.543 { 00:17:39.543 "name": null, 00:17:39.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.543 "is_configured": false, 00:17:39.543 "data_offset": 0, 00:17:39.543 "data_size": 63488 00:17:39.543 }, 00:17:39.543 { 00:17:39.543 "name": null, 00:17:39.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.543 "is_configured": false, 00:17:39.543 "data_offset": 2048, 00:17:39.543 "data_size": 63488 00:17:39.543 }, 00:17:39.543 { 00:17:39.543 "name": "BaseBdev3", 00:17:39.543 "uuid": "18ba49bb-decd-57d0-b2df-999026f6bdc4", 00:17:39.543 "is_configured": true, 00:17:39.543 "data_offset": 2048, 00:17:39.543 "data_size": 63488 00:17:39.543 }, 00:17:39.543 { 00:17:39.543 "name": "BaseBdev4", 00:17:39.543 "uuid": 
"db012dc0-07dc-554d-baa8-2f0548984f52", 00:17:39.543 "is_configured": true, 00:17:39.543 "data_offset": 2048, 00:17:39.543 "data_size": 63488 00:17:39.543 } 00:17:39.543 ] 00:17:39.543 }' 00:17:39.543 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.543 15:44:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.800 15:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:39.800 15:44:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.800 15:44:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.800 [2024-12-06 15:44:23.047729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:39.800 [2024-12-06 15:44:23.048092] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:17:39.800 [2024-12-06 15:44:23.048127] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:39.800 [2024-12-06 15:44:23.048175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:39.800 [2024-12-06 15:44:23.063826] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:17:39.800 15:44:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.800 15:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:39.800 [2024-12-06 15:44:23.066268] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:41.176 15:44:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:41.176 15:44:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.176 15:44:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:41.176 15:44:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:41.176 15:44:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.176 15:44:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.176 15:44:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.176 15:44:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.176 15:44:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:41.176 15:44:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.176 15:44:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.176 "name": "raid_bdev1", 00:17:41.176 "uuid": "e9d12d7b-533a-4d11-bdf1-6e4c687b79c8", 00:17:41.176 "strip_size_kb": 0, 00:17:41.176 "state": "online", 
00:17:41.176 "raid_level": "raid1", 00:17:41.176 "superblock": true, 00:17:41.176 "num_base_bdevs": 4, 00:17:41.176 "num_base_bdevs_discovered": 3, 00:17:41.176 "num_base_bdevs_operational": 3, 00:17:41.176 "process": { 00:17:41.176 "type": "rebuild", 00:17:41.176 "target": "spare", 00:17:41.176 "progress": { 00:17:41.176 "blocks": 20480, 00:17:41.176 "percent": 32 00:17:41.176 } 00:17:41.176 }, 00:17:41.176 "base_bdevs_list": [ 00:17:41.176 { 00:17:41.176 "name": "spare", 00:17:41.176 "uuid": "40600fd0-3f2c-5a18-a96d-e0a89fff583f", 00:17:41.176 "is_configured": true, 00:17:41.176 "data_offset": 2048, 00:17:41.176 "data_size": 63488 00:17:41.176 }, 00:17:41.176 { 00:17:41.176 "name": null, 00:17:41.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.176 "is_configured": false, 00:17:41.176 "data_offset": 2048, 00:17:41.176 "data_size": 63488 00:17:41.176 }, 00:17:41.176 { 00:17:41.176 "name": "BaseBdev3", 00:17:41.176 "uuid": "18ba49bb-decd-57d0-b2df-999026f6bdc4", 00:17:41.176 "is_configured": true, 00:17:41.176 "data_offset": 2048, 00:17:41.176 "data_size": 63488 00:17:41.176 }, 00:17:41.176 { 00:17:41.176 "name": "BaseBdev4", 00:17:41.176 "uuid": "db012dc0-07dc-554d-baa8-2f0548984f52", 00:17:41.176 "is_configured": true, 00:17:41.176 "data_offset": 2048, 00:17:41.176 "data_size": 63488 00:17:41.176 } 00:17:41.176 ] 00:17:41.176 }' 00:17:41.176 15:44:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.176 15:44:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:41.176 15:44:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.176 15:44:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:41.176 15:44:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:41.176 15:44:24 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.176 15:44:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:41.176 [2024-12-06 15:44:24.202574] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:41.176 [2024-12-06 15:44:24.275214] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:41.176 [2024-12-06 15:44:24.275448] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:41.176 [2024-12-06 15:44:24.275472] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:41.176 [2024-12-06 15:44:24.275487] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:41.176 15:44:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.176 15:44:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:41.176 15:44:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:41.176 15:44:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:41.176 15:44:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:41.176 15:44:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:41.177 15:44:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:41.177 15:44:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.177 15:44:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.177 15:44:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.177 15:44:24 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.177 15:44:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.177 15:44:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.177 15:44:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.177 15:44:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:41.177 15:44:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.177 15:44:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.177 "name": "raid_bdev1", 00:17:41.177 "uuid": "e9d12d7b-533a-4d11-bdf1-6e4c687b79c8", 00:17:41.177 "strip_size_kb": 0, 00:17:41.177 "state": "online", 00:17:41.177 "raid_level": "raid1", 00:17:41.177 "superblock": true, 00:17:41.177 "num_base_bdevs": 4, 00:17:41.177 "num_base_bdevs_discovered": 2, 00:17:41.177 "num_base_bdevs_operational": 2, 00:17:41.177 "base_bdevs_list": [ 00:17:41.177 { 00:17:41.177 "name": null, 00:17:41.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.177 "is_configured": false, 00:17:41.177 "data_offset": 0, 00:17:41.177 "data_size": 63488 00:17:41.177 }, 00:17:41.177 { 00:17:41.177 "name": null, 00:17:41.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.177 "is_configured": false, 00:17:41.177 "data_offset": 2048, 00:17:41.177 "data_size": 63488 00:17:41.177 }, 00:17:41.177 { 00:17:41.177 "name": "BaseBdev3", 00:17:41.177 "uuid": "18ba49bb-decd-57d0-b2df-999026f6bdc4", 00:17:41.177 "is_configured": true, 00:17:41.177 "data_offset": 2048, 00:17:41.177 "data_size": 63488 00:17:41.177 }, 00:17:41.177 { 00:17:41.177 "name": "BaseBdev4", 00:17:41.177 "uuid": "db012dc0-07dc-554d-baa8-2f0548984f52", 00:17:41.177 "is_configured": true, 00:17:41.177 "data_offset": 2048, 00:17:41.177 
"data_size": 63488 00:17:41.177 } 00:17:41.177 ] 00:17:41.177 }' 00:17:41.177 15:44:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.177 15:44:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:41.743 15:44:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:41.743 15:44:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.743 15:44:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:41.743 [2024-12-06 15:44:24.746360] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:41.743 [2024-12-06 15:44:24.746448] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.743 [2024-12-06 15:44:24.746487] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:17:41.743 [2024-12-06 15:44:24.746515] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.743 [2024-12-06 15:44:24.747097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.743 [2024-12-06 15:44:24.747128] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:41.743 [2024-12-06 15:44:24.747242] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:41.743 [2024-12-06 15:44:24.747262] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:17:41.743 [2024-12-06 15:44:24.747275] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:41.743 [2024-12-06 15:44:24.747308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:41.743 [2024-12-06 15:44:24.763409] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:17:41.743 spare 00:17:41.743 15:44:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.743 15:44:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:41.743 [2024-12-06 15:44:24.765861] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:42.677 15:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:42.677 15:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.677 15:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:42.677 15:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:42.677 15:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.677 15:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.677 15:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.677 15:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.677 15:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:42.677 15:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.677 15:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.677 "name": "raid_bdev1", 00:17:42.677 "uuid": "e9d12d7b-533a-4d11-bdf1-6e4c687b79c8", 00:17:42.677 "strip_size_kb": 0, 00:17:42.677 
"state": "online", 00:17:42.677 "raid_level": "raid1", 00:17:42.677 "superblock": true, 00:17:42.677 "num_base_bdevs": 4, 00:17:42.677 "num_base_bdevs_discovered": 3, 00:17:42.677 "num_base_bdevs_operational": 3, 00:17:42.677 "process": { 00:17:42.677 "type": "rebuild", 00:17:42.677 "target": "spare", 00:17:42.677 "progress": { 00:17:42.677 "blocks": 20480, 00:17:42.677 "percent": 32 00:17:42.677 } 00:17:42.677 }, 00:17:42.677 "base_bdevs_list": [ 00:17:42.677 { 00:17:42.677 "name": "spare", 00:17:42.677 "uuid": "40600fd0-3f2c-5a18-a96d-e0a89fff583f", 00:17:42.677 "is_configured": true, 00:17:42.677 "data_offset": 2048, 00:17:42.677 "data_size": 63488 00:17:42.677 }, 00:17:42.677 { 00:17:42.677 "name": null, 00:17:42.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.677 "is_configured": false, 00:17:42.677 "data_offset": 2048, 00:17:42.677 "data_size": 63488 00:17:42.677 }, 00:17:42.677 { 00:17:42.677 "name": "BaseBdev3", 00:17:42.677 "uuid": "18ba49bb-decd-57d0-b2df-999026f6bdc4", 00:17:42.677 "is_configured": true, 00:17:42.677 "data_offset": 2048, 00:17:42.677 "data_size": 63488 00:17:42.677 }, 00:17:42.677 { 00:17:42.677 "name": "BaseBdev4", 00:17:42.677 "uuid": "db012dc0-07dc-554d-baa8-2f0548984f52", 00:17:42.677 "is_configured": true, 00:17:42.677 "data_offset": 2048, 00:17:42.677 "data_size": 63488 00:17:42.677 } 00:17:42.677 ] 00:17:42.677 }' 00:17:42.677 15:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.677 15:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:42.677 15:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.677 15:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:42.677 15:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:42.677 15:44:25 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.677 15:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:42.677 [2024-12-06 15:44:25.914099] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:42.936 [2024-12-06 15:44:25.974783] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:42.936 [2024-12-06 15:44:25.974889] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.936 [2024-12-06 15:44:25.974914] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:42.936 [2024-12-06 15:44:25.974924] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:42.936 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.936 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:42.936 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:42.936 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:42.936 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:42.936 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:42.936 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:42.936 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.936 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.936 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.937 15:44:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.937 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.937 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.937 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.937 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:42.937 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.937 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.937 "name": "raid_bdev1", 00:17:42.937 "uuid": "e9d12d7b-533a-4d11-bdf1-6e4c687b79c8", 00:17:42.937 "strip_size_kb": 0, 00:17:42.937 "state": "online", 00:17:42.937 "raid_level": "raid1", 00:17:42.937 "superblock": true, 00:17:42.937 "num_base_bdevs": 4, 00:17:42.937 "num_base_bdevs_discovered": 2, 00:17:42.937 "num_base_bdevs_operational": 2, 00:17:42.937 "base_bdevs_list": [ 00:17:42.937 { 00:17:42.937 "name": null, 00:17:42.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.937 "is_configured": false, 00:17:42.937 "data_offset": 0, 00:17:42.937 "data_size": 63488 00:17:42.937 }, 00:17:42.937 { 00:17:42.937 "name": null, 00:17:42.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.937 "is_configured": false, 00:17:42.937 "data_offset": 2048, 00:17:42.937 "data_size": 63488 00:17:42.937 }, 00:17:42.937 { 00:17:42.937 "name": "BaseBdev3", 00:17:42.937 "uuid": "18ba49bb-decd-57d0-b2df-999026f6bdc4", 00:17:42.937 "is_configured": true, 00:17:42.937 "data_offset": 2048, 00:17:42.937 "data_size": 63488 00:17:42.937 }, 00:17:42.937 { 00:17:42.937 "name": "BaseBdev4", 00:17:42.937 "uuid": "db012dc0-07dc-554d-baa8-2f0548984f52", 00:17:42.937 "is_configured": true, 00:17:42.937 "data_offset": 2048, 00:17:42.937 
"data_size": 63488 00:17:42.937 } 00:17:42.937 ] 00:17:42.937 }' 00:17:42.937 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.937 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:43.242 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:43.242 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.242 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:43.242 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:43.242 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.242 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.242 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.242 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:43.242 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.242 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.242 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.242 "name": "raid_bdev1", 00:17:43.242 "uuid": "e9d12d7b-533a-4d11-bdf1-6e4c687b79c8", 00:17:43.242 "strip_size_kb": 0, 00:17:43.242 "state": "online", 00:17:43.242 "raid_level": "raid1", 00:17:43.242 "superblock": true, 00:17:43.242 "num_base_bdevs": 4, 00:17:43.242 "num_base_bdevs_discovered": 2, 00:17:43.242 "num_base_bdevs_operational": 2, 00:17:43.242 "base_bdevs_list": [ 00:17:43.242 { 00:17:43.242 "name": null, 00:17:43.242 "uuid": "00000000-0000-0000-0000-000000000000", 
00:17:43.242 "is_configured": false, 00:17:43.242 "data_offset": 0, 00:17:43.242 "data_size": 63488 00:17:43.242 }, 00:17:43.242 { 00:17:43.242 "name": null, 00:17:43.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.242 "is_configured": false, 00:17:43.242 "data_offset": 2048, 00:17:43.242 "data_size": 63488 00:17:43.242 }, 00:17:43.242 { 00:17:43.242 "name": "BaseBdev3", 00:17:43.242 "uuid": "18ba49bb-decd-57d0-b2df-999026f6bdc4", 00:17:43.242 "is_configured": true, 00:17:43.242 "data_offset": 2048, 00:17:43.242 "data_size": 63488 00:17:43.242 }, 00:17:43.242 { 00:17:43.242 "name": "BaseBdev4", 00:17:43.242 "uuid": "db012dc0-07dc-554d-baa8-2f0548984f52", 00:17:43.242 "is_configured": true, 00:17:43.242 "data_offset": 2048, 00:17:43.242 "data_size": 63488 00:17:43.242 } 00:17:43.242 ] 00:17:43.242 }' 00:17:43.242 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.242 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:43.242 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.508 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:43.508 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:43.508 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.508 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:43.508 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.508 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:43.508 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.508 15:44:26 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:43.508 [2024-12-06 15:44:26.574245] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:43.508 [2024-12-06 15:44:26.574447] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:43.508 [2024-12-06 15:44:26.574488] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:17:43.508 [2024-12-06 15:44:26.574510] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:43.508 [2024-12-06 15:44:26.575065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:43.508 [2024-12-06 15:44:26.575086] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:43.508 [2024-12-06 15:44:26.575189] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:43.508 [2024-12-06 15:44:26.575211] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:17:43.508 [2024-12-06 15:44:26.575224] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:43.508 [2024-12-06 15:44:26.575238] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:43.508 BaseBdev1 00:17:43.508 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.508 15:44:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:44.445 15:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:44.445 15:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:44.445 15:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:44.445 15:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:44.445 15:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:44.445 15:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:44.445 15:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.445 15:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.445 15:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.445 15:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.445 15:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.445 15:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.445 15:44:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.445 15:44:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:44.445 15:44:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.445 15:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.445 "name": "raid_bdev1", 00:17:44.445 "uuid": "e9d12d7b-533a-4d11-bdf1-6e4c687b79c8", 00:17:44.445 "strip_size_kb": 0, 00:17:44.445 "state": "online", 00:17:44.445 "raid_level": "raid1", 00:17:44.445 "superblock": true, 00:17:44.445 "num_base_bdevs": 4, 00:17:44.445 "num_base_bdevs_discovered": 2, 00:17:44.445 "num_base_bdevs_operational": 2, 00:17:44.445 "base_bdevs_list": [ 00:17:44.445 { 00:17:44.445 "name": null, 00:17:44.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.445 "is_configured": false, 00:17:44.445 
"data_offset": 0, 00:17:44.445 "data_size": 63488 00:17:44.445 }, 00:17:44.445 { 00:17:44.445 "name": null, 00:17:44.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.445 "is_configured": false, 00:17:44.445 "data_offset": 2048, 00:17:44.445 "data_size": 63488 00:17:44.445 }, 00:17:44.445 { 00:17:44.445 "name": "BaseBdev3", 00:17:44.445 "uuid": "18ba49bb-decd-57d0-b2df-999026f6bdc4", 00:17:44.445 "is_configured": true, 00:17:44.445 "data_offset": 2048, 00:17:44.445 "data_size": 63488 00:17:44.445 }, 00:17:44.445 { 00:17:44.445 "name": "BaseBdev4", 00:17:44.445 "uuid": "db012dc0-07dc-554d-baa8-2f0548984f52", 00:17:44.445 "is_configured": true, 00:17:44.445 "data_offset": 2048, 00:17:44.445 "data_size": 63488 00:17:44.445 } 00:17:44.445 ] 00:17:44.445 }' 00:17:44.445 15:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.445 15:44:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:44.705 15:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:44.705 15:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:44.705 15:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:44.965 15:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:44.965 15:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:44.965 15:44:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.965 15:44:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.965 15:44:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:44.965 15:44:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:17:44.965 15:44:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.965 15:44:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:44.965 "name": "raid_bdev1", 00:17:44.965 "uuid": "e9d12d7b-533a-4d11-bdf1-6e4c687b79c8", 00:17:44.965 "strip_size_kb": 0, 00:17:44.965 "state": "online", 00:17:44.965 "raid_level": "raid1", 00:17:44.965 "superblock": true, 00:17:44.965 "num_base_bdevs": 4, 00:17:44.965 "num_base_bdevs_discovered": 2, 00:17:44.965 "num_base_bdevs_operational": 2, 00:17:44.965 "base_bdevs_list": [ 00:17:44.965 { 00:17:44.965 "name": null, 00:17:44.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.965 "is_configured": false, 00:17:44.965 "data_offset": 0, 00:17:44.965 "data_size": 63488 00:17:44.965 }, 00:17:44.965 { 00:17:44.965 "name": null, 00:17:44.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.965 "is_configured": false, 00:17:44.965 "data_offset": 2048, 00:17:44.965 "data_size": 63488 00:17:44.965 }, 00:17:44.965 { 00:17:44.965 "name": "BaseBdev3", 00:17:44.965 "uuid": "18ba49bb-decd-57d0-b2df-999026f6bdc4", 00:17:44.965 "is_configured": true, 00:17:44.965 "data_offset": 2048, 00:17:44.965 "data_size": 63488 00:17:44.965 }, 00:17:44.965 { 00:17:44.965 "name": "BaseBdev4", 00:17:44.965 "uuid": "db012dc0-07dc-554d-baa8-2f0548984f52", 00:17:44.965 "is_configured": true, 00:17:44.965 "data_offset": 2048, 00:17:44.965 "data_size": 63488 00:17:44.965 } 00:17:44.965 ] 00:17:44.965 }' 00:17:44.965 15:44:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.965 15:44:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:44.965 15:44:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.965 15:44:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 
00:17:44.965 15:44:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:44.965 15:44:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:17:44.965 15:44:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:44.965 15:44:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:44.965 15:44:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:44.965 15:44:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:44.965 15:44:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:44.965 15:44:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:44.965 15:44:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.965 15:44:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:44.965 [2024-12-06 15:44:28.149155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:44.965 [2024-12-06 15:44:28.149367] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:17:44.965 [2024-12-06 15:44:28.149389] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:44.965 request: 00:17:44.965 { 00:17:44.965 "base_bdev": "BaseBdev1", 00:17:44.965 "raid_bdev": "raid_bdev1", 00:17:44.965 "method": "bdev_raid_add_base_bdev", 00:17:44.965 "req_id": 1 00:17:44.965 } 00:17:44.965 Got JSON-RPC error response 00:17:44.965 response: 00:17:44.966 { 00:17:44.966 "code": -22, 
00:17:44.966 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:44.966 } 00:17:44.966 15:44:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:44.966 15:44:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:17:44.966 15:44:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:44.966 15:44:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:44.966 15:44:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:44.966 15:44:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:45.902 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:45.902 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:45.902 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:45.902 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:45.902 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:45.902 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:45.902 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.902 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.902 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.902 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.902 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.902 15:44:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.902 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.902 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:46.162 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.162 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.162 "name": "raid_bdev1", 00:17:46.162 "uuid": "e9d12d7b-533a-4d11-bdf1-6e4c687b79c8", 00:17:46.162 "strip_size_kb": 0, 00:17:46.162 "state": "online", 00:17:46.162 "raid_level": "raid1", 00:17:46.162 "superblock": true, 00:17:46.162 "num_base_bdevs": 4, 00:17:46.162 "num_base_bdevs_discovered": 2, 00:17:46.162 "num_base_bdevs_operational": 2, 00:17:46.162 "base_bdevs_list": [ 00:17:46.162 { 00:17:46.162 "name": null, 00:17:46.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.162 "is_configured": false, 00:17:46.162 "data_offset": 0, 00:17:46.162 "data_size": 63488 00:17:46.162 }, 00:17:46.162 { 00:17:46.162 "name": null, 00:17:46.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.162 "is_configured": false, 00:17:46.162 "data_offset": 2048, 00:17:46.162 "data_size": 63488 00:17:46.162 }, 00:17:46.162 { 00:17:46.162 "name": "BaseBdev3", 00:17:46.162 "uuid": "18ba49bb-decd-57d0-b2df-999026f6bdc4", 00:17:46.162 "is_configured": true, 00:17:46.162 "data_offset": 2048, 00:17:46.162 "data_size": 63488 00:17:46.162 }, 00:17:46.162 { 00:17:46.162 "name": "BaseBdev4", 00:17:46.162 "uuid": "db012dc0-07dc-554d-baa8-2f0548984f52", 00:17:46.162 "is_configured": true, 00:17:46.162 "data_offset": 2048, 00:17:46.162 "data_size": 63488 00:17:46.162 } 00:17:46.162 ] 00:17:46.162 }' 00:17:46.162 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.162 15:44:29 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:46.421 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:46.421 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.421 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:46.421 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:46.421 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:46.421 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.421 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.421 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.421 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:46.421 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.421 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:46.421 "name": "raid_bdev1", 00:17:46.421 "uuid": "e9d12d7b-533a-4d11-bdf1-6e4c687b79c8", 00:17:46.421 "strip_size_kb": 0, 00:17:46.421 "state": "online", 00:17:46.421 "raid_level": "raid1", 00:17:46.421 "superblock": true, 00:17:46.421 "num_base_bdevs": 4, 00:17:46.421 "num_base_bdevs_discovered": 2, 00:17:46.421 "num_base_bdevs_operational": 2, 00:17:46.421 "base_bdevs_list": [ 00:17:46.421 { 00:17:46.421 "name": null, 00:17:46.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.421 "is_configured": false, 00:17:46.421 "data_offset": 0, 00:17:46.421 "data_size": 63488 00:17:46.421 }, 00:17:46.421 { 00:17:46.421 "name": null, 00:17:46.421 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:46.421 "is_configured": false, 00:17:46.421 "data_offset": 2048, 00:17:46.421 "data_size": 63488 00:17:46.421 }, 00:17:46.421 { 00:17:46.421 "name": "BaseBdev3", 00:17:46.421 "uuid": "18ba49bb-decd-57d0-b2df-999026f6bdc4", 00:17:46.421 "is_configured": true, 00:17:46.421 "data_offset": 2048, 00:17:46.421 "data_size": 63488 00:17:46.421 }, 00:17:46.421 { 00:17:46.421 "name": "BaseBdev4", 00:17:46.421 "uuid": "db012dc0-07dc-554d-baa8-2f0548984f52", 00:17:46.421 "is_configured": true, 00:17:46.421 "data_offset": 2048, 00:17:46.421 "data_size": 63488 00:17:46.421 } 00:17:46.421 ] 00:17:46.421 }' 00:17:46.421 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.421 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:46.421 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.421 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:46.421 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79188 00:17:46.421 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79188 ']' 00:17:46.421 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79188 00:17:46.421 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:17:46.421 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:46.421 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79188 00:17:46.681 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:46.681 killing process with pid 79188 00:17:46.681 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:46.681 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79188' 00:17:46.681 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79188 00:17:46.681 Received shutdown signal, test time was about 17.596523 seconds 00:17:46.681 00:17:46.681 Latency(us) 00:17:46.681 [2024-12-06T15:44:29.976Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:46.681 [2024-12-06T15:44:29.976Z] =================================================================================================================== 00:17:46.681 [2024-12-06T15:44:29.976Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:46.681 [2024-12-06 15:44:29.734221] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:46.681 15:44:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79188 00:17:46.681 [2024-12-06 15:44:29.734378] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:46.681 [2024-12-06 15:44:29.734459] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:46.681 [2024-12-06 15:44:29.734474] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:46.940 [2024-12-06 15:44:30.189929] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:48.318 15:44:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:17:48.318 00:17:48.318 real 0m21.238s 00:17:48.318 user 0m27.067s 00:17:48.318 sys 0m3.157s 00:17:48.318 15:44:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:48.318 ************************************ 00:17:48.318 END TEST raid_rebuild_test_sb_io 00:17:48.318 ************************************ 00:17:48.318 15:44:31 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@10 -- # set +x 00:17:48.318 15:44:31 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:17:48.318 15:44:31 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:17:48.318 15:44:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:48.318 15:44:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:48.318 15:44:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:48.318 ************************************ 00:17:48.318 START TEST raid5f_state_function_test 00:17:48.318 ************************************ 00:17:48.318 15:44:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:17:48.318 15:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:48.318 15:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:17:48.318 15:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:17:48.318 15:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:48.318 15:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:48.318 15:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:48.318 15:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:48.318 15:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:48.318 15:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:48.318 15:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:48.318 15:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:48.318 15:44:31 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:48.318 15:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:48.318 15:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:48.318 15:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:48.318 15:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:48.318 15:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:48.318 15:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:48.318 15:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:48.318 15:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:48.318 15:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:48.318 15:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:48.318 15:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:48.318 15:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:48.318 15:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:17:48.318 15:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:17:48.318 15:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=79904 00:17:48.318 Process raid pid: 79904 00:17:48.318 15:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:48.318 
15:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79904' 00:17:48.318 15:44:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 79904 00:17:48.318 15:44:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 79904 ']' 00:17:48.318 15:44:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.318 15:44:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:48.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:48.318 15:44:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:48.318 15:44:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:48.318 15:44:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.577 [2024-12-06 15:44:31.682927] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:17:48.577 [2024-12-06 15:44:31.683054] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:48.836 [2024-12-06 15:44:31.871169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.836 [2024-12-06 15:44:32.008973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.095 [2024-12-06 15:44:32.251853] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:49.095 [2024-12-06 15:44:32.251907] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:49.355 15:44:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:49.355 15:44:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:17:49.355 15:44:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:49.355 15:44:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.355 15:44:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.355 [2024-12-06 15:44:32.515373] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:49.355 [2024-12-06 15:44:32.515438] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:49.355 [2024-12-06 15:44:32.515450] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:49.355 [2024-12-06 15:44:32.515464] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:49.355 [2024-12-06 15:44:32.515472] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:17:49.355 [2024-12-06 15:44:32.515484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:49.355 15:44:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.355 15:44:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:49.355 15:44:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:49.355 15:44:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:49.355 15:44:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:49.355 15:44:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:49.355 15:44:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:49.355 15:44:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.355 15:44:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.355 15:44:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.355 15:44:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.355 15:44:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.355 15:44:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.355 15:44:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.355 15:44:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.355 15:44:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:17:49.355 15:44:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.355 "name": "Existed_Raid", 00:17:49.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.355 "strip_size_kb": 64, 00:17:49.355 "state": "configuring", 00:17:49.355 "raid_level": "raid5f", 00:17:49.355 "superblock": false, 00:17:49.355 "num_base_bdevs": 3, 00:17:49.355 "num_base_bdevs_discovered": 0, 00:17:49.355 "num_base_bdevs_operational": 3, 00:17:49.355 "base_bdevs_list": [ 00:17:49.355 { 00:17:49.355 "name": "BaseBdev1", 00:17:49.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.355 "is_configured": false, 00:17:49.355 "data_offset": 0, 00:17:49.355 "data_size": 0 00:17:49.355 }, 00:17:49.355 { 00:17:49.355 "name": "BaseBdev2", 00:17:49.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.355 "is_configured": false, 00:17:49.355 "data_offset": 0, 00:17:49.355 "data_size": 0 00:17:49.355 }, 00:17:49.355 { 00:17:49.355 "name": "BaseBdev3", 00:17:49.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.355 "is_configured": false, 00:17:49.355 "data_offset": 0, 00:17:49.355 "data_size": 0 00:17:49.355 } 00:17:49.355 ] 00:17:49.355 }' 00:17:49.355 15:44:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.355 15:44:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.924 15:44:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:49.924 15:44:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.924 15:44:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.924 [2024-12-06 15:44:32.918751] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:49.924 [2024-12-06 15:44:32.918799] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:17:49.924 15:44:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.924 15:44:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:49.924 15:44:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.924 15:44:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.924 [2024-12-06 15:44:32.930718] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:49.924 [2024-12-06 15:44:32.930766] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:49.924 [2024-12-06 15:44:32.930777] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:49.924 [2024-12-06 15:44:32.930791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:49.924 [2024-12-06 15:44:32.930798] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:49.924 [2024-12-06 15:44:32.930811] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:49.924 15:44:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.924 15:44:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:49.924 15:44:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.924 15:44:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.924 [2024-12-06 15:44:32.985667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:49.924 BaseBdev1 00:17:49.924 15:44:32 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.924 15:44:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:49.924 15:44:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:49.924 15:44:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:49.924 15:44:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:49.924 15:44:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:49.924 15:44:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:49.924 15:44:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:49.924 15:44:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.924 15:44:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.924 15:44:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.924 15:44:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:49.924 15:44:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.924 15:44:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.924 [ 00:17:49.924 { 00:17:49.924 "name": "BaseBdev1", 00:17:49.924 "aliases": [ 00:17:49.924 "d73e70ac-87b9-448f-aa55-41c1365f2008" 00:17:49.924 ], 00:17:49.924 "product_name": "Malloc disk", 00:17:49.924 "block_size": 512, 00:17:49.924 "num_blocks": 65536, 00:17:49.924 "uuid": "d73e70ac-87b9-448f-aa55-41c1365f2008", 00:17:49.924 "assigned_rate_limits": { 00:17:49.924 "rw_ios_per_sec": 0, 00:17:49.924 
"rw_mbytes_per_sec": 0, 00:17:49.924 "r_mbytes_per_sec": 0, 00:17:49.924 "w_mbytes_per_sec": 0 00:17:49.924 }, 00:17:49.924 "claimed": true, 00:17:49.924 "claim_type": "exclusive_write", 00:17:49.924 "zoned": false, 00:17:49.924 "supported_io_types": { 00:17:49.924 "read": true, 00:17:49.924 "write": true, 00:17:49.924 "unmap": true, 00:17:49.924 "flush": true, 00:17:49.924 "reset": true, 00:17:49.924 "nvme_admin": false, 00:17:49.924 "nvme_io": false, 00:17:49.924 "nvme_io_md": false, 00:17:49.924 "write_zeroes": true, 00:17:49.924 "zcopy": true, 00:17:49.924 "get_zone_info": false, 00:17:49.924 "zone_management": false, 00:17:49.924 "zone_append": false, 00:17:49.924 "compare": false, 00:17:49.924 "compare_and_write": false, 00:17:49.924 "abort": true, 00:17:49.924 "seek_hole": false, 00:17:49.924 "seek_data": false, 00:17:49.924 "copy": true, 00:17:49.924 "nvme_iov_md": false 00:17:49.924 }, 00:17:49.924 "memory_domains": [ 00:17:49.924 { 00:17:49.924 "dma_device_id": "system", 00:17:49.924 "dma_device_type": 1 00:17:49.924 }, 00:17:49.924 { 00:17:49.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.924 "dma_device_type": 2 00:17:49.924 } 00:17:49.924 ], 00:17:49.924 "driver_specific": {} 00:17:49.924 } 00:17:49.924 ] 00:17:49.924 15:44:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.924 15:44:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:49.924 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:49.924 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:49.924 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:49.924 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:49.924 15:44:33 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:49.924 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:49.924 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.924 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.924 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.924 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.924 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.924 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.924 15:44:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.924 15:44:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.924 15:44:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.924 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.924 "name": "Existed_Raid", 00:17:49.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.924 "strip_size_kb": 64, 00:17:49.924 "state": "configuring", 00:17:49.924 "raid_level": "raid5f", 00:17:49.924 "superblock": false, 00:17:49.924 "num_base_bdevs": 3, 00:17:49.924 "num_base_bdevs_discovered": 1, 00:17:49.925 "num_base_bdevs_operational": 3, 00:17:49.925 "base_bdevs_list": [ 00:17:49.925 { 00:17:49.925 "name": "BaseBdev1", 00:17:49.925 "uuid": "d73e70ac-87b9-448f-aa55-41c1365f2008", 00:17:49.925 "is_configured": true, 00:17:49.925 "data_offset": 0, 00:17:49.925 "data_size": 65536 00:17:49.925 }, 00:17:49.925 { 00:17:49.925 "name": 
"BaseBdev2", 00:17:49.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.925 "is_configured": false, 00:17:49.925 "data_offset": 0, 00:17:49.925 "data_size": 0 00:17:49.925 }, 00:17:49.925 { 00:17:49.925 "name": "BaseBdev3", 00:17:49.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.925 "is_configured": false, 00:17:49.925 "data_offset": 0, 00:17:49.925 "data_size": 0 00:17:49.925 } 00:17:49.925 ] 00:17:49.925 }' 00:17:49.925 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.925 15:44:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.184 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:50.184 15:44:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.184 15:44:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.184 [2024-12-06 15:44:33.445210] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:50.184 [2024-12-06 15:44:33.445380] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:50.184 15:44:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.184 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:50.184 15:44:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.184 15:44:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.184 [2024-12-06 15:44:33.453266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:50.184 [2024-12-06 15:44:33.455761] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:17:50.184 [2024-12-06 15:44:33.455830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:50.184 [2024-12-06 15:44:33.455930] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:50.184 [2024-12-06 15:44:33.455975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:50.184 15:44:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.184 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:50.184 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:50.184 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:50.184 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:50.184 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:50.184 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:50.184 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:50.184 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:50.184 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.184 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.184 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.185 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.185 15:44:33 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.185 15:44:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.185 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.185 15:44:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.444 15:44:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.444 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.444 "name": "Existed_Raid", 00:17:50.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.444 "strip_size_kb": 64, 00:17:50.444 "state": "configuring", 00:17:50.444 "raid_level": "raid5f", 00:17:50.444 "superblock": false, 00:17:50.444 "num_base_bdevs": 3, 00:17:50.444 "num_base_bdevs_discovered": 1, 00:17:50.444 "num_base_bdevs_operational": 3, 00:17:50.444 "base_bdevs_list": [ 00:17:50.444 { 00:17:50.444 "name": "BaseBdev1", 00:17:50.444 "uuid": "d73e70ac-87b9-448f-aa55-41c1365f2008", 00:17:50.444 "is_configured": true, 00:17:50.444 "data_offset": 0, 00:17:50.444 "data_size": 65536 00:17:50.444 }, 00:17:50.444 { 00:17:50.444 "name": "BaseBdev2", 00:17:50.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.444 "is_configured": false, 00:17:50.444 "data_offset": 0, 00:17:50.444 "data_size": 0 00:17:50.444 }, 00:17:50.444 { 00:17:50.444 "name": "BaseBdev3", 00:17:50.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.444 "is_configured": false, 00:17:50.444 "data_offset": 0, 00:17:50.444 "data_size": 0 00:17:50.444 } 00:17:50.444 ] 00:17:50.444 }' 00:17:50.444 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.444 15:44:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.703 15:44:33 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:50.703 15:44:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.703 15:44:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.703 [2024-12-06 15:44:33.867035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:50.703 BaseBdev2 00:17:50.703 15:44:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.703 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:50.703 15:44:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:50.703 15:44:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:50.703 15:44:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:50.703 15:44:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:50.704 15:44:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:50.704 15:44:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:50.704 15:44:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.704 15:44:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.704 15:44:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.704 15:44:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:50.704 15:44:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.704 15:44:33 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:50.704 [ 00:17:50.704 { 00:17:50.704 "name": "BaseBdev2", 00:17:50.704 "aliases": [ 00:17:50.704 "e846419b-5364-4f17-acbc-ebeae7669241" 00:17:50.704 ], 00:17:50.704 "product_name": "Malloc disk", 00:17:50.704 "block_size": 512, 00:17:50.704 "num_blocks": 65536, 00:17:50.704 "uuid": "e846419b-5364-4f17-acbc-ebeae7669241", 00:17:50.704 "assigned_rate_limits": { 00:17:50.704 "rw_ios_per_sec": 0, 00:17:50.704 "rw_mbytes_per_sec": 0, 00:17:50.704 "r_mbytes_per_sec": 0, 00:17:50.704 "w_mbytes_per_sec": 0 00:17:50.704 }, 00:17:50.704 "claimed": true, 00:17:50.704 "claim_type": "exclusive_write", 00:17:50.704 "zoned": false, 00:17:50.704 "supported_io_types": { 00:17:50.704 "read": true, 00:17:50.704 "write": true, 00:17:50.704 "unmap": true, 00:17:50.704 "flush": true, 00:17:50.704 "reset": true, 00:17:50.704 "nvme_admin": false, 00:17:50.704 "nvme_io": false, 00:17:50.704 "nvme_io_md": false, 00:17:50.704 "write_zeroes": true, 00:17:50.704 "zcopy": true, 00:17:50.704 "get_zone_info": false, 00:17:50.704 "zone_management": false, 00:17:50.704 "zone_append": false, 00:17:50.704 "compare": false, 00:17:50.704 "compare_and_write": false, 00:17:50.704 "abort": true, 00:17:50.704 "seek_hole": false, 00:17:50.704 "seek_data": false, 00:17:50.704 "copy": true, 00:17:50.704 "nvme_iov_md": false 00:17:50.704 }, 00:17:50.704 "memory_domains": [ 00:17:50.704 { 00:17:50.704 "dma_device_id": "system", 00:17:50.704 "dma_device_type": 1 00:17:50.704 }, 00:17:50.704 { 00:17:50.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.704 "dma_device_type": 2 00:17:50.704 } 00:17:50.704 ], 00:17:50.704 "driver_specific": {} 00:17:50.704 } 00:17:50.704 ] 00:17:50.704 15:44:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.704 15:44:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:50.704 15:44:33 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:50.704 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:50.704 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:50.704 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:50.704 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:50.704 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:50.704 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:50.704 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:50.704 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.704 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.704 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.704 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.704 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.704 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.704 15:44:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.704 15:44:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.704 15:44:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.704 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:17:50.704 "name": "Existed_Raid", 00:17:50.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.704 "strip_size_kb": 64, 00:17:50.704 "state": "configuring", 00:17:50.704 "raid_level": "raid5f", 00:17:50.704 "superblock": false, 00:17:50.704 "num_base_bdevs": 3, 00:17:50.704 "num_base_bdevs_discovered": 2, 00:17:50.704 "num_base_bdevs_operational": 3, 00:17:50.704 "base_bdevs_list": [ 00:17:50.704 { 00:17:50.704 "name": "BaseBdev1", 00:17:50.704 "uuid": "d73e70ac-87b9-448f-aa55-41c1365f2008", 00:17:50.704 "is_configured": true, 00:17:50.704 "data_offset": 0, 00:17:50.704 "data_size": 65536 00:17:50.704 }, 00:17:50.704 { 00:17:50.704 "name": "BaseBdev2", 00:17:50.704 "uuid": "e846419b-5364-4f17-acbc-ebeae7669241", 00:17:50.704 "is_configured": true, 00:17:50.704 "data_offset": 0, 00:17:50.704 "data_size": 65536 00:17:50.704 }, 00:17:50.704 { 00:17:50.704 "name": "BaseBdev3", 00:17:50.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.704 "is_configured": false, 00:17:50.704 "data_offset": 0, 00:17:50.704 "data_size": 0 00:17:50.704 } 00:17:50.704 ] 00:17:50.704 }' 00:17:50.704 15:44:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.704 15:44:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.285 15:44:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:51.285 15:44:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.285 15:44:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.285 [2024-12-06 15:44:34.414768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:51.285 [2024-12-06 15:44:34.415106] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:51.285 [2024-12-06 15:44:34.415167] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:17:51.285 [2024-12-06 15:44:34.415626] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:51.285 [2024-12-06 15:44:34.421765] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:51.285 [2024-12-06 15:44:34.421891] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:51.285 [2024-12-06 15:44:34.422422] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:51.285 BaseBdev3 00:17:51.285 15:44:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.285 15:44:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:51.286 15:44:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:51.286 15:44:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:51.286 15:44:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:51.286 15:44:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:51.286 15:44:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:51.286 15:44:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:51.286 15:44:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.286 15:44:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.286 15:44:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.286 15:44:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:17:51.286 15:44:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.286 15:44:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.286 [ 00:17:51.286 { 00:17:51.286 "name": "BaseBdev3", 00:17:51.286 "aliases": [ 00:17:51.286 "d724572f-7cdf-45aa-b177-a448f8fd9f85" 00:17:51.286 ], 00:17:51.286 "product_name": "Malloc disk", 00:17:51.286 "block_size": 512, 00:17:51.286 "num_blocks": 65536, 00:17:51.286 "uuid": "d724572f-7cdf-45aa-b177-a448f8fd9f85", 00:17:51.286 "assigned_rate_limits": { 00:17:51.286 "rw_ios_per_sec": 0, 00:17:51.286 "rw_mbytes_per_sec": 0, 00:17:51.286 "r_mbytes_per_sec": 0, 00:17:51.286 "w_mbytes_per_sec": 0 00:17:51.286 }, 00:17:51.286 "claimed": true, 00:17:51.286 "claim_type": "exclusive_write", 00:17:51.286 "zoned": false, 00:17:51.286 "supported_io_types": { 00:17:51.286 "read": true, 00:17:51.286 "write": true, 00:17:51.286 "unmap": true, 00:17:51.286 "flush": true, 00:17:51.286 "reset": true, 00:17:51.286 "nvme_admin": false, 00:17:51.286 "nvme_io": false, 00:17:51.286 "nvme_io_md": false, 00:17:51.286 "write_zeroes": true, 00:17:51.286 "zcopy": true, 00:17:51.286 "get_zone_info": false, 00:17:51.286 "zone_management": false, 00:17:51.286 "zone_append": false, 00:17:51.286 "compare": false, 00:17:51.286 "compare_and_write": false, 00:17:51.286 "abort": true, 00:17:51.286 "seek_hole": false, 00:17:51.286 "seek_data": false, 00:17:51.286 "copy": true, 00:17:51.286 "nvme_iov_md": false 00:17:51.286 }, 00:17:51.286 "memory_domains": [ 00:17:51.286 { 00:17:51.286 "dma_device_id": "system", 00:17:51.286 "dma_device_type": 1 00:17:51.286 }, 00:17:51.286 { 00:17:51.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.286 "dma_device_type": 2 00:17:51.286 } 00:17:51.286 ], 00:17:51.286 "driver_specific": {} 00:17:51.286 } 00:17:51.286 ] 00:17:51.286 15:44:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:17:51.286 15:44:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:51.286 15:44:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:51.286 15:44:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:51.286 15:44:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:51.286 15:44:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:51.286 15:44:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.286 15:44:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:51.286 15:44:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:51.286 15:44:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:51.286 15:44:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.286 15:44:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.286 15:44:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.286 15:44:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.286 15:44:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.286 15:44:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.286 15:44:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.286 15:44:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:51.286 15:44:34 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.286 15:44:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.286 "name": "Existed_Raid", 00:17:51.286 "uuid": "f91062f1-c1d1-410e-a080-505556ffe138", 00:17:51.286 "strip_size_kb": 64, 00:17:51.286 "state": "online", 00:17:51.286 "raid_level": "raid5f", 00:17:51.286 "superblock": false, 00:17:51.286 "num_base_bdevs": 3, 00:17:51.286 "num_base_bdevs_discovered": 3, 00:17:51.286 "num_base_bdevs_operational": 3, 00:17:51.286 "base_bdevs_list": [ 00:17:51.286 { 00:17:51.286 "name": "BaseBdev1", 00:17:51.286 "uuid": "d73e70ac-87b9-448f-aa55-41c1365f2008", 00:17:51.286 "is_configured": true, 00:17:51.286 "data_offset": 0, 00:17:51.286 "data_size": 65536 00:17:51.286 }, 00:17:51.286 { 00:17:51.286 "name": "BaseBdev2", 00:17:51.286 "uuid": "e846419b-5364-4f17-acbc-ebeae7669241", 00:17:51.286 "is_configured": true, 00:17:51.286 "data_offset": 0, 00:17:51.286 "data_size": 65536 00:17:51.286 }, 00:17:51.286 { 00:17:51.286 "name": "BaseBdev3", 00:17:51.286 "uuid": "d724572f-7cdf-45aa-b177-a448f8fd9f85", 00:17:51.286 "is_configured": true, 00:17:51.286 "data_offset": 0, 00:17:51.286 "data_size": 65536 00:17:51.286 } 00:17:51.286 ] 00:17:51.286 }' 00:17:51.286 15:44:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.286 15:44:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.856 15:44:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:51.856 15:44:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:51.856 15:44:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:51.856 15:44:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:51.856 15:44:34 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:51.856 15:44:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:51.856 15:44:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:51.856 15:44:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.856 15:44:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:51.856 15:44:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.856 [2024-12-06 15:44:34.921916] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:51.856 15:44:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.856 15:44:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:51.856 "name": "Existed_Raid", 00:17:51.856 "aliases": [ 00:17:51.856 "f91062f1-c1d1-410e-a080-505556ffe138" 00:17:51.856 ], 00:17:51.856 "product_name": "Raid Volume", 00:17:51.856 "block_size": 512, 00:17:51.856 "num_blocks": 131072, 00:17:51.856 "uuid": "f91062f1-c1d1-410e-a080-505556ffe138", 00:17:51.856 "assigned_rate_limits": { 00:17:51.856 "rw_ios_per_sec": 0, 00:17:51.856 "rw_mbytes_per_sec": 0, 00:17:51.856 "r_mbytes_per_sec": 0, 00:17:51.856 "w_mbytes_per_sec": 0 00:17:51.856 }, 00:17:51.856 "claimed": false, 00:17:51.856 "zoned": false, 00:17:51.856 "supported_io_types": { 00:17:51.856 "read": true, 00:17:51.856 "write": true, 00:17:51.856 "unmap": false, 00:17:51.857 "flush": false, 00:17:51.857 "reset": true, 00:17:51.857 "nvme_admin": false, 00:17:51.857 "nvme_io": false, 00:17:51.857 "nvme_io_md": false, 00:17:51.857 "write_zeroes": true, 00:17:51.857 "zcopy": false, 00:17:51.857 "get_zone_info": false, 00:17:51.857 "zone_management": false, 00:17:51.857 "zone_append": false, 
00:17:51.857 "compare": false, 00:17:51.857 "compare_and_write": false, 00:17:51.857 "abort": false, 00:17:51.857 "seek_hole": false, 00:17:51.857 "seek_data": false, 00:17:51.857 "copy": false, 00:17:51.857 "nvme_iov_md": false 00:17:51.857 }, 00:17:51.857 "driver_specific": { 00:17:51.857 "raid": { 00:17:51.857 "uuid": "f91062f1-c1d1-410e-a080-505556ffe138", 00:17:51.857 "strip_size_kb": 64, 00:17:51.857 "state": "online", 00:17:51.857 "raid_level": "raid5f", 00:17:51.857 "superblock": false, 00:17:51.857 "num_base_bdevs": 3, 00:17:51.857 "num_base_bdevs_discovered": 3, 00:17:51.857 "num_base_bdevs_operational": 3, 00:17:51.857 "base_bdevs_list": [ 00:17:51.857 { 00:17:51.857 "name": "BaseBdev1", 00:17:51.857 "uuid": "d73e70ac-87b9-448f-aa55-41c1365f2008", 00:17:51.857 "is_configured": true, 00:17:51.857 "data_offset": 0, 00:17:51.857 "data_size": 65536 00:17:51.857 }, 00:17:51.857 { 00:17:51.857 "name": "BaseBdev2", 00:17:51.857 "uuid": "e846419b-5364-4f17-acbc-ebeae7669241", 00:17:51.857 "is_configured": true, 00:17:51.857 "data_offset": 0, 00:17:51.857 "data_size": 65536 00:17:51.857 }, 00:17:51.857 { 00:17:51.857 "name": "BaseBdev3", 00:17:51.857 "uuid": "d724572f-7cdf-45aa-b177-a448f8fd9f85", 00:17:51.857 "is_configured": true, 00:17:51.857 "data_offset": 0, 00:17:51.857 "data_size": 65536 00:17:51.857 } 00:17:51.857 ] 00:17:51.857 } 00:17:51.857 } 00:17:51.857 }' 00:17:51.857 15:44:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:51.857 15:44:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:51.857 BaseBdev2 00:17:51.857 BaseBdev3' 00:17:51.857 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:51.857 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:17:51.857 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:51.857 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:51.857 15:44:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.857 15:44:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.857 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:51.857 15:44:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.857 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:51.857 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:51.857 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:51.857 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:51.857 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:51.857 15:44:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.857 15:44:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.857 15:44:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.857 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:51.857 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:51.857 15:44:35 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:51.857 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:51.857 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:51.857 15:44:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.857 15:44:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.116 15:44:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.116 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:52.116 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:52.116 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:52.116 15:44:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.116 15:44:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.116 [2024-12-06 15:44:35.189429] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:52.116 15:44:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.116 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:52.116 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:52.116 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:52.116 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:52.116 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:52.116 
15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:17:52.116 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:52.116 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.116 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:52.116 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:52.116 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:52.116 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.116 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.116 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.116 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.116 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.116 15:44:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.116 15:44:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.116 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.116 15:44:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.116 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.116 "name": "Existed_Raid", 00:17:52.116 "uuid": "f91062f1-c1d1-410e-a080-505556ffe138", 00:17:52.116 "strip_size_kb": 64, 00:17:52.116 "state": 
"online", 00:17:52.116 "raid_level": "raid5f", 00:17:52.116 "superblock": false, 00:17:52.116 "num_base_bdevs": 3, 00:17:52.116 "num_base_bdevs_discovered": 2, 00:17:52.116 "num_base_bdevs_operational": 2, 00:17:52.116 "base_bdevs_list": [ 00:17:52.116 { 00:17:52.116 "name": null, 00:17:52.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.117 "is_configured": false, 00:17:52.117 "data_offset": 0, 00:17:52.117 "data_size": 65536 00:17:52.117 }, 00:17:52.117 { 00:17:52.117 "name": "BaseBdev2", 00:17:52.117 "uuid": "e846419b-5364-4f17-acbc-ebeae7669241", 00:17:52.117 "is_configured": true, 00:17:52.117 "data_offset": 0, 00:17:52.117 "data_size": 65536 00:17:52.117 }, 00:17:52.117 { 00:17:52.117 "name": "BaseBdev3", 00:17:52.117 "uuid": "d724572f-7cdf-45aa-b177-a448f8fd9f85", 00:17:52.117 "is_configured": true, 00:17:52.117 "data_offset": 0, 00:17:52.117 "data_size": 65536 00:17:52.117 } 00:17:52.117 ] 00:17:52.117 }' 00:17:52.117 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.117 15:44:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.727 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:52.727 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:52.727 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.727 15:44:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.727 15:44:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.727 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:52.727 15:44:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.727 15:44:35 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:52.727 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:52.727 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:52.727 15:44:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.727 15:44:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.727 [2024-12-06 15:44:35.770587] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:52.727 [2024-12-06 15:44:35.770713] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:52.727 [2024-12-06 15:44:35.878604] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:52.727 15:44:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.727 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:52.727 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:52.727 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.727 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:52.727 15:44:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.727 15:44:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.727 15:44:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.727 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:52.727 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:17:52.727 15:44:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:52.727 15:44:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.727 15:44:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.727 [2024-12-06 15:44:35.934598] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:52.727 [2024-12-06 15:44:35.934666] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:52.986 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.986 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:52.986 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:52.986 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.986 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.986 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.986 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:52.986 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.986 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:52.986 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:52.986 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:17:52.986 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:52.986 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:17:52.986 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:52.986 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.987 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.987 BaseBdev2 00:17:52.987 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.987 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:52.987 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:52.987 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:52.987 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:52.987 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:52.987 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:52.987 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:52.987 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.987 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.987 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.987 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:52.987 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.987 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:17:52.987 [ 00:17:52.987 { 00:17:52.987 "name": "BaseBdev2", 00:17:52.987 "aliases": [ 00:17:52.987 "296019fc-0bb7-4bf4-ae66-b1d5cd48b867" 00:17:52.987 ], 00:17:52.987 "product_name": "Malloc disk", 00:17:52.987 "block_size": 512, 00:17:52.987 "num_blocks": 65536, 00:17:52.987 "uuid": "296019fc-0bb7-4bf4-ae66-b1d5cd48b867", 00:17:52.987 "assigned_rate_limits": { 00:17:52.987 "rw_ios_per_sec": 0, 00:17:52.987 "rw_mbytes_per_sec": 0, 00:17:52.987 "r_mbytes_per_sec": 0, 00:17:52.987 "w_mbytes_per_sec": 0 00:17:52.987 }, 00:17:52.987 "claimed": false, 00:17:52.987 "zoned": false, 00:17:52.987 "supported_io_types": { 00:17:52.987 "read": true, 00:17:52.987 "write": true, 00:17:52.987 "unmap": true, 00:17:52.987 "flush": true, 00:17:52.987 "reset": true, 00:17:52.987 "nvme_admin": false, 00:17:52.987 "nvme_io": false, 00:17:52.987 "nvme_io_md": false, 00:17:52.987 "write_zeroes": true, 00:17:52.987 "zcopy": true, 00:17:52.987 "get_zone_info": false, 00:17:52.987 "zone_management": false, 00:17:52.987 "zone_append": false, 00:17:52.987 "compare": false, 00:17:52.987 "compare_and_write": false, 00:17:52.987 "abort": true, 00:17:52.987 "seek_hole": false, 00:17:52.987 "seek_data": false, 00:17:52.987 "copy": true, 00:17:52.987 "nvme_iov_md": false 00:17:52.987 }, 00:17:52.987 "memory_domains": [ 00:17:52.987 { 00:17:52.987 "dma_device_id": "system", 00:17:52.987 "dma_device_type": 1 00:17:52.987 }, 00:17:52.987 { 00:17:52.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.987 "dma_device_type": 2 00:17:52.987 } 00:17:52.987 ], 00:17:52.987 "driver_specific": {} 00:17:52.987 } 00:17:52.987 ] 00:17:52.987 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.987 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:52.987 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:52.987 15:44:36 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:52.987 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:52.987 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.987 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.987 BaseBdev3 00:17:52.987 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.987 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:52.987 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:52.987 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:52.987 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:52.987 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:52.987 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:52.987 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:52.987 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.987 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.987 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.987 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:52.987 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.987 15:44:36 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:52.987 [ 00:17:52.987 { 00:17:52.987 "name": "BaseBdev3", 00:17:52.987 "aliases": [ 00:17:52.987 "fdf3fd17-220f-49f7-b620-2cc60748c8ae" 00:17:52.987 ], 00:17:52.987 "product_name": "Malloc disk", 00:17:52.987 "block_size": 512, 00:17:52.987 "num_blocks": 65536, 00:17:52.987 "uuid": "fdf3fd17-220f-49f7-b620-2cc60748c8ae", 00:17:52.987 "assigned_rate_limits": { 00:17:52.987 "rw_ios_per_sec": 0, 00:17:52.987 "rw_mbytes_per_sec": 0, 00:17:52.987 "r_mbytes_per_sec": 0, 00:17:52.987 "w_mbytes_per_sec": 0 00:17:52.987 }, 00:17:52.987 "claimed": false, 00:17:52.987 "zoned": false, 00:17:53.246 "supported_io_types": { 00:17:53.246 "read": true, 00:17:53.246 "write": true, 00:17:53.246 "unmap": true, 00:17:53.246 "flush": true, 00:17:53.246 "reset": true, 00:17:53.246 "nvme_admin": false, 00:17:53.246 "nvme_io": false, 00:17:53.246 "nvme_io_md": false, 00:17:53.246 "write_zeroes": true, 00:17:53.246 "zcopy": true, 00:17:53.246 "get_zone_info": false, 00:17:53.246 "zone_management": false, 00:17:53.246 "zone_append": false, 00:17:53.246 "compare": false, 00:17:53.246 "compare_and_write": false, 00:17:53.246 "abort": true, 00:17:53.246 "seek_hole": false, 00:17:53.246 "seek_data": false, 00:17:53.246 "copy": true, 00:17:53.246 "nvme_iov_md": false 00:17:53.246 }, 00:17:53.246 "memory_domains": [ 00:17:53.246 { 00:17:53.246 "dma_device_id": "system", 00:17:53.246 "dma_device_type": 1 00:17:53.246 }, 00:17:53.246 { 00:17:53.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.246 "dma_device_type": 2 00:17:53.246 } 00:17:53.246 ], 00:17:53.246 "driver_specific": {} 00:17:53.246 } 00:17:53.246 ] 00:17:53.246 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.246 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:53.246 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:53.246 15:44:36 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:53.246 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:53.246 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.246 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.246 [2024-12-06 15:44:36.302215] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:53.246 [2024-12-06 15:44:36.302402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:53.246 [2024-12-06 15:44:36.302449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:53.246 [2024-12-06 15:44:36.304942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:53.246 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.246 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:53.246 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:53.246 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:53.246 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:53.246 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:53.246 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:53.246 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.246 15:44:36 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.246 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.246 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.246 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.246 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.246 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.246 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.246 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.246 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.246 "name": "Existed_Raid", 00:17:53.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.246 "strip_size_kb": 64, 00:17:53.246 "state": "configuring", 00:17:53.246 "raid_level": "raid5f", 00:17:53.246 "superblock": false, 00:17:53.246 "num_base_bdevs": 3, 00:17:53.246 "num_base_bdevs_discovered": 2, 00:17:53.246 "num_base_bdevs_operational": 3, 00:17:53.246 "base_bdevs_list": [ 00:17:53.246 { 00:17:53.246 "name": "BaseBdev1", 00:17:53.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.246 "is_configured": false, 00:17:53.246 "data_offset": 0, 00:17:53.246 "data_size": 0 00:17:53.246 }, 00:17:53.246 { 00:17:53.246 "name": "BaseBdev2", 00:17:53.246 "uuid": "296019fc-0bb7-4bf4-ae66-b1d5cd48b867", 00:17:53.246 "is_configured": true, 00:17:53.246 "data_offset": 0, 00:17:53.246 "data_size": 65536 00:17:53.247 }, 00:17:53.247 { 00:17:53.247 "name": "BaseBdev3", 00:17:53.247 "uuid": "fdf3fd17-220f-49f7-b620-2cc60748c8ae", 00:17:53.247 "is_configured": true, 
00:17:53.247 "data_offset": 0, 00:17:53.247 "data_size": 65536 00:17:53.247 } 00:17:53.247 ] 00:17:53.247 }' 00:17:53.247 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.247 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.505 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:53.505 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.505 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.505 [2024-12-06 15:44:36.710154] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:53.505 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.505 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:53.505 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:53.505 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:53.505 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:53.505 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:53.505 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:53.505 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.505 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.505 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.505 15:44:36 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.505 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.505 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.505 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.505 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.505 15:44:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.505 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.505 "name": "Existed_Raid", 00:17:53.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.505 "strip_size_kb": 64, 00:17:53.505 "state": "configuring", 00:17:53.505 "raid_level": "raid5f", 00:17:53.505 "superblock": false, 00:17:53.505 "num_base_bdevs": 3, 00:17:53.505 "num_base_bdevs_discovered": 1, 00:17:53.505 "num_base_bdevs_operational": 3, 00:17:53.505 "base_bdevs_list": [ 00:17:53.505 { 00:17:53.505 "name": "BaseBdev1", 00:17:53.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.505 "is_configured": false, 00:17:53.505 "data_offset": 0, 00:17:53.505 "data_size": 0 00:17:53.505 }, 00:17:53.505 { 00:17:53.505 "name": null, 00:17:53.505 "uuid": "296019fc-0bb7-4bf4-ae66-b1d5cd48b867", 00:17:53.505 "is_configured": false, 00:17:53.505 "data_offset": 0, 00:17:53.505 "data_size": 65536 00:17:53.505 }, 00:17:53.505 { 00:17:53.505 "name": "BaseBdev3", 00:17:53.505 "uuid": "fdf3fd17-220f-49f7-b620-2cc60748c8ae", 00:17:53.505 "is_configured": true, 00:17:53.505 "data_offset": 0, 00:17:53.505 "data_size": 65536 00:17:53.505 } 00:17:53.505 ] 00:17:53.505 }' 00:17:53.505 15:44:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.505 15:44:36 
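(Editor's aside, not part of the trace: `verify_raid_bdev_state` above selects the raid bdev from `bdev_raid_get_bdevs all` with `jq -r '.[] | select(.name == "Existed_Raid")'` and compares individual fields. A hedged Python rendering of that same select-and-compare — field names and values come from the Existed_Raid dump in the log, the function is an illustration, not SPDK code:)

```python
import json

# Output shape of `rpc.py bdev_raid_get_bdevs all`, trimmed to the fields
# the shell helper actually inspects (values copied from the trace above).
raid_bdevs_json = '''
[
  {
    "name": "Existed_Raid",
    "strip_size_kb": 64,
    "state": "configuring",
    "raid_level": "raid5f",
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 3
  }
]
'''

def verify_raid_bdev_state(raid_bdevs, name, expected_state,
                           raid_level, strip_size, operational):
    """Python equivalent of the jq select + field comparisons in bdev_raid.sh."""
    info = next(b for b in raid_bdevs if b["name"] == name)
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == operational)

raid_bdevs = json.loads(raid_bdevs_json)
# Mirrors `verify_raid_bdev_state Existed_Raid configuring raid5f 64 3`.
assert verify_raid_bdev_state(raid_bdevs, "Existed_Raid",
                              "configuring", "raid5f", 64, 3)
```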
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.073 15:44:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.073 15:44:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.073 15:44:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.073 15:44:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:54.073 15:44:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.073 15:44:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:54.073 15:44:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:54.073 15:44:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.073 15:44:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.073 [2024-12-06 15:44:37.234099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:54.073 BaseBdev1 00:17:54.073 15:44:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.073 15:44:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:54.073 15:44:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:54.073 15:44:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:54.073 15:44:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:54.073 15:44:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:54.073 15:44:37 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:54.074 15:44:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:54.074 15:44:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.074 15:44:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.074 15:44:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.074 15:44:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:54.074 15:44:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.074 15:44:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.074 [ 00:17:54.074 { 00:17:54.074 "name": "BaseBdev1", 00:17:54.074 "aliases": [ 00:17:54.074 "5b65b1d1-722b-4e24-9e85-7d7e64f17d65" 00:17:54.074 ], 00:17:54.074 "product_name": "Malloc disk", 00:17:54.074 "block_size": 512, 00:17:54.074 "num_blocks": 65536, 00:17:54.074 "uuid": "5b65b1d1-722b-4e24-9e85-7d7e64f17d65", 00:17:54.074 "assigned_rate_limits": { 00:17:54.074 "rw_ios_per_sec": 0, 00:17:54.074 "rw_mbytes_per_sec": 0, 00:17:54.074 "r_mbytes_per_sec": 0, 00:17:54.074 "w_mbytes_per_sec": 0 00:17:54.074 }, 00:17:54.074 "claimed": true, 00:17:54.074 "claim_type": "exclusive_write", 00:17:54.074 "zoned": false, 00:17:54.074 "supported_io_types": { 00:17:54.074 "read": true, 00:17:54.074 "write": true, 00:17:54.074 "unmap": true, 00:17:54.074 "flush": true, 00:17:54.074 "reset": true, 00:17:54.074 "nvme_admin": false, 00:17:54.074 "nvme_io": false, 00:17:54.074 "nvme_io_md": false, 00:17:54.074 "write_zeroes": true, 00:17:54.074 "zcopy": true, 00:17:54.074 "get_zone_info": false, 00:17:54.074 "zone_management": false, 00:17:54.074 "zone_append": false, 00:17:54.074 
"compare": false, 00:17:54.074 "compare_and_write": false, 00:17:54.074 "abort": true, 00:17:54.074 "seek_hole": false, 00:17:54.074 "seek_data": false, 00:17:54.074 "copy": true, 00:17:54.074 "nvme_iov_md": false 00:17:54.074 }, 00:17:54.074 "memory_domains": [ 00:17:54.074 { 00:17:54.074 "dma_device_id": "system", 00:17:54.074 "dma_device_type": 1 00:17:54.074 }, 00:17:54.074 { 00:17:54.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.074 "dma_device_type": 2 00:17:54.074 } 00:17:54.074 ], 00:17:54.074 "driver_specific": {} 00:17:54.074 } 00:17:54.074 ] 00:17:54.074 15:44:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.074 15:44:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:54.074 15:44:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:54.074 15:44:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:54.074 15:44:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:54.074 15:44:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:54.074 15:44:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:54.074 15:44:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:54.074 15:44:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.074 15:44:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.074 15:44:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.074 15:44:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.074 15:44:37 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.074 15:44:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.074 15:44:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.074 15:44:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.074 15:44:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.074 15:44:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.074 "name": "Existed_Raid", 00:17:54.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.074 "strip_size_kb": 64, 00:17:54.074 "state": "configuring", 00:17:54.074 "raid_level": "raid5f", 00:17:54.074 "superblock": false, 00:17:54.074 "num_base_bdevs": 3, 00:17:54.074 "num_base_bdevs_discovered": 2, 00:17:54.074 "num_base_bdevs_operational": 3, 00:17:54.074 "base_bdevs_list": [ 00:17:54.074 { 00:17:54.074 "name": "BaseBdev1", 00:17:54.074 "uuid": "5b65b1d1-722b-4e24-9e85-7d7e64f17d65", 00:17:54.074 "is_configured": true, 00:17:54.074 "data_offset": 0, 00:17:54.074 "data_size": 65536 00:17:54.074 }, 00:17:54.074 { 00:17:54.074 "name": null, 00:17:54.074 "uuid": "296019fc-0bb7-4bf4-ae66-b1d5cd48b867", 00:17:54.074 "is_configured": false, 00:17:54.074 "data_offset": 0, 00:17:54.074 "data_size": 65536 00:17:54.074 }, 00:17:54.074 { 00:17:54.074 "name": "BaseBdev3", 00:17:54.074 "uuid": "fdf3fd17-220f-49f7-b620-2cc60748c8ae", 00:17:54.074 "is_configured": true, 00:17:54.074 "data_offset": 0, 00:17:54.074 "data_size": 65536 00:17:54.074 } 00:17:54.074 ] 00:17:54.074 }' 00:17:54.074 15:44:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.074 15:44:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.643 15:44:37 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.643 15:44:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:54.643 15:44:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.643 15:44:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.643 15:44:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.643 15:44:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:54.643 15:44:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:54.643 15:44:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.643 15:44:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.643 [2024-12-06 15:44:37.729491] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:54.643 15:44:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.643 15:44:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:54.643 15:44:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:54.643 15:44:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:54.643 15:44:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:54.643 15:44:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:54.643 15:44:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:54.643 15:44:37 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.643 15:44:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.643 15:44:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.643 15:44:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.643 15:44:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.643 15:44:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.643 15:44:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.643 15:44:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.643 15:44:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.643 15:44:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.643 "name": "Existed_Raid", 00:17:54.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.643 "strip_size_kb": 64, 00:17:54.643 "state": "configuring", 00:17:54.643 "raid_level": "raid5f", 00:17:54.643 "superblock": false, 00:17:54.643 "num_base_bdevs": 3, 00:17:54.643 "num_base_bdevs_discovered": 1, 00:17:54.643 "num_base_bdevs_operational": 3, 00:17:54.643 "base_bdevs_list": [ 00:17:54.643 { 00:17:54.643 "name": "BaseBdev1", 00:17:54.643 "uuid": "5b65b1d1-722b-4e24-9e85-7d7e64f17d65", 00:17:54.643 "is_configured": true, 00:17:54.643 "data_offset": 0, 00:17:54.643 "data_size": 65536 00:17:54.643 }, 00:17:54.643 { 00:17:54.643 "name": null, 00:17:54.643 "uuid": "296019fc-0bb7-4bf4-ae66-b1d5cd48b867", 00:17:54.643 "is_configured": false, 00:17:54.643 "data_offset": 0, 00:17:54.643 "data_size": 65536 00:17:54.643 }, 00:17:54.643 { 00:17:54.643 "name": null, 
00:17:54.643 "uuid": "fdf3fd17-220f-49f7-b620-2cc60748c8ae", 00:17:54.643 "is_configured": false, 00:17:54.643 "data_offset": 0, 00:17:54.643 "data_size": 65536 00:17:54.643 } 00:17:54.643 ] 00:17:54.643 }' 00:17:54.643 15:44:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.643 15:44:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.902 15:44:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.902 15:44:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.902 15:44:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.902 15:44:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:54.902 15:44:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.902 15:44:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:54.902 15:44:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:54.902 15:44:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.902 15:44:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.161 [2024-12-06 15:44:38.196876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:55.161 15:44:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.161 15:44:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:55.161 15:44:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:55.161 15:44:38 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:55.161 15:44:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:55.161 15:44:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:55.161 15:44:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:55.161 15:44:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.161 15:44:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.161 15:44:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.161 15:44:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.161 15:44:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.161 15:44:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.161 15:44:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.161 15:44:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.161 15:44:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.161 15:44:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.161 "name": "Existed_Raid", 00:17:55.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.161 "strip_size_kb": 64, 00:17:55.161 "state": "configuring", 00:17:55.161 "raid_level": "raid5f", 00:17:55.161 "superblock": false, 00:17:55.161 "num_base_bdevs": 3, 00:17:55.161 "num_base_bdevs_discovered": 2, 00:17:55.161 "num_base_bdevs_operational": 3, 00:17:55.161 "base_bdevs_list": [ 00:17:55.161 { 
00:17:55.161 "name": "BaseBdev1", 00:17:55.161 "uuid": "5b65b1d1-722b-4e24-9e85-7d7e64f17d65", 00:17:55.161 "is_configured": true, 00:17:55.161 "data_offset": 0, 00:17:55.161 "data_size": 65536 00:17:55.161 }, 00:17:55.161 { 00:17:55.161 "name": null, 00:17:55.161 "uuid": "296019fc-0bb7-4bf4-ae66-b1d5cd48b867", 00:17:55.161 "is_configured": false, 00:17:55.161 "data_offset": 0, 00:17:55.161 "data_size": 65536 00:17:55.161 }, 00:17:55.161 { 00:17:55.161 "name": "BaseBdev3", 00:17:55.161 "uuid": "fdf3fd17-220f-49f7-b620-2cc60748c8ae", 00:17:55.161 "is_configured": true, 00:17:55.161 "data_offset": 0, 00:17:55.161 "data_size": 65536 00:17:55.161 } 00:17:55.161 ] 00:17:55.161 }' 00:17:55.161 15:44:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.161 15:44:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.420 15:44:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.420 15:44:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.420 15:44:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.420 15:44:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:55.420 15:44:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.420 15:44:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:55.420 15:44:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:55.420 15:44:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.420 15:44:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.420 [2024-12-06 15:44:38.652333] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:55.679 15:44:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.679 15:44:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:55.679 15:44:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:55.679 15:44:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:55.679 15:44:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:55.679 15:44:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:55.679 15:44:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:55.679 15:44:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.679 15:44:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.679 15:44:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.679 15:44:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.679 15:44:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.679 15:44:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.679 15:44:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.679 15:44:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.679 15:44:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.679 15:44:38 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.679 "name": "Existed_Raid", 00:17:55.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.679 "strip_size_kb": 64, 00:17:55.679 "state": "configuring", 00:17:55.679 "raid_level": "raid5f", 00:17:55.679 "superblock": false, 00:17:55.679 "num_base_bdevs": 3, 00:17:55.679 "num_base_bdevs_discovered": 1, 00:17:55.679 "num_base_bdevs_operational": 3, 00:17:55.679 "base_bdevs_list": [ 00:17:55.679 { 00:17:55.679 "name": null, 00:17:55.679 "uuid": "5b65b1d1-722b-4e24-9e85-7d7e64f17d65", 00:17:55.679 "is_configured": false, 00:17:55.679 "data_offset": 0, 00:17:55.679 "data_size": 65536 00:17:55.679 }, 00:17:55.679 { 00:17:55.679 "name": null, 00:17:55.679 "uuid": "296019fc-0bb7-4bf4-ae66-b1d5cd48b867", 00:17:55.679 "is_configured": false, 00:17:55.679 "data_offset": 0, 00:17:55.679 "data_size": 65536 00:17:55.679 }, 00:17:55.679 { 00:17:55.679 "name": "BaseBdev3", 00:17:55.679 "uuid": "fdf3fd17-220f-49f7-b620-2cc60748c8ae", 00:17:55.679 "is_configured": true, 00:17:55.679 "data_offset": 0, 00:17:55.679 "data_size": 65536 00:17:55.679 } 00:17:55.679 ] 00:17:55.679 }' 00:17:55.679 15:44:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.679 15:44:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.937 15:44:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.937 15:44:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:55.937 15:44:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.937 15:44:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.195 15:44:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.195 15:44:39 bdev_raid.raid5f_state_function_test -- 
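(Editor's aside, not part of the trace: after `bdev_malloc_delete BaseBdev1` the dump above keeps all three slots in `base_bdevs_list` but reports `num_base_bdevs_discovered: 1` — only BaseBdev3 remains `is_configured`. A small sketch of how that count relates to the list, using the slot data from the log; the helper is illustrative:)

```python
import json

# base_bdevs_list as dumped above: removed/deleted members keep their
# slots (name null) with is_configured false; only BaseBdev3 is configured.
base_bdevs_json = '''
[
  {"name": null, "uuid": "5b65b1d1-722b-4e24-9e85-7d7e64f17d65", "is_configured": false},
  {"name": null, "uuid": "296019fc-0bb7-4bf4-ae66-b1d5cd48b867", "is_configured": false},
  {"name": "BaseBdev3", "uuid": "fdf3fd17-220f-49f7-b620-2cc60748c8ae", "is_configured": true}
]
'''

def count_discovered(base_bdevs_list):
    """Slots currently holding a configured base bdev, i.e. the value the
    trace reports as num_base_bdevs_discovered."""
    return sum(1 for b in base_bdevs_list if b["is_configured"])

base_bdevs = json.loads(base_bdevs_json)
assert count_discovered(base_bdevs) == 1  # matches the dump above
```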
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:56.195 15:44:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:56.195 15:44:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.195 15:44:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.195 [2024-12-06 15:44:39.257190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:56.195 15:44:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.195 15:44:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:56.195 15:44:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:56.195 15:44:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:56.195 15:44:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:56.195 15:44:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:56.195 15:44:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:56.195 15:44:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.195 15:44:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.195 15:44:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.195 15:44:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.195 15:44:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.196 15:44:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.196 15:44:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.196 15:44:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.196 15:44:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.196 15:44:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.196 "name": "Existed_Raid", 00:17:56.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.196 "strip_size_kb": 64, 00:17:56.196 "state": "configuring", 00:17:56.196 "raid_level": "raid5f", 00:17:56.196 "superblock": false, 00:17:56.196 "num_base_bdevs": 3, 00:17:56.196 "num_base_bdevs_discovered": 2, 00:17:56.196 "num_base_bdevs_operational": 3, 00:17:56.196 "base_bdevs_list": [ 00:17:56.196 { 00:17:56.196 "name": null, 00:17:56.196 "uuid": "5b65b1d1-722b-4e24-9e85-7d7e64f17d65", 00:17:56.196 "is_configured": false, 00:17:56.196 "data_offset": 0, 00:17:56.196 "data_size": 65536 00:17:56.196 }, 00:17:56.196 { 00:17:56.196 "name": "BaseBdev2", 00:17:56.196 "uuid": "296019fc-0bb7-4bf4-ae66-b1d5cd48b867", 00:17:56.196 "is_configured": true, 00:17:56.196 "data_offset": 0, 00:17:56.196 "data_size": 65536 00:17:56.196 }, 00:17:56.196 { 00:17:56.196 "name": "BaseBdev3", 00:17:56.196 "uuid": "fdf3fd17-220f-49f7-b620-2cc60748c8ae", 00:17:56.196 "is_configured": true, 00:17:56.196 "data_offset": 0, 00:17:56.196 "data_size": 65536 00:17:56.196 } 00:17:56.196 ] 00:17:56.196 }' 00:17:56.196 15:44:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.196 15:44:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.454 15:44:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.454 15:44:39 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.454 15:44:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.454 15:44:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:56.454 15:44:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.454 15:44:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:56.454 15:44:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.454 15:44:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.454 15:44:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.454 15:44:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:56.712 15:44:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.712 15:44:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5b65b1d1-722b-4e24-9e85-7d7e64f17d65 00:17:56.712 15:44:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.712 15:44:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.712 [2024-12-06 15:44:39.829059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:56.712 [2024-12-06 15:44:39.829123] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:56.712 [2024-12-06 15:44:39.829136] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:17:56.712 [2024-12-06 15:44:39.829424] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:17:56.712 [2024-12-06 15:44:39.835024] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:56.712 [2024-12-06 15:44:39.835050] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:56.712 [2024-12-06 15:44:39.835370] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.712 NewBaseBdev 00:17:56.712 15:44:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.712 15:44:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:56.712 15:44:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:17:56.712 15:44:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:56.712 15:44:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:56.712 15:44:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:56.712 15:44:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:56.712 15:44:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:56.712 15:44:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.712 15:44:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.712 15:44:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.712 15:44:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:56.712 15:44:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.712 15:44:39 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.712 [ 00:17:56.712 { 00:17:56.712 "name": "NewBaseBdev", 00:17:56.712 "aliases": [ 00:17:56.712 "5b65b1d1-722b-4e24-9e85-7d7e64f17d65" 00:17:56.712 ], 00:17:56.712 "product_name": "Malloc disk", 00:17:56.712 "block_size": 512, 00:17:56.712 "num_blocks": 65536, 00:17:56.712 "uuid": "5b65b1d1-722b-4e24-9e85-7d7e64f17d65", 00:17:56.712 "assigned_rate_limits": { 00:17:56.712 "rw_ios_per_sec": 0, 00:17:56.712 "rw_mbytes_per_sec": 0, 00:17:56.712 "r_mbytes_per_sec": 0, 00:17:56.712 "w_mbytes_per_sec": 0 00:17:56.712 }, 00:17:56.712 "claimed": true, 00:17:56.712 "claim_type": "exclusive_write", 00:17:56.712 "zoned": false, 00:17:56.712 "supported_io_types": { 00:17:56.712 "read": true, 00:17:56.712 "write": true, 00:17:56.712 "unmap": true, 00:17:56.712 "flush": true, 00:17:56.712 "reset": true, 00:17:56.712 "nvme_admin": false, 00:17:56.712 "nvme_io": false, 00:17:56.712 "nvme_io_md": false, 00:17:56.712 "write_zeroes": true, 00:17:56.712 "zcopy": true, 00:17:56.712 "get_zone_info": false, 00:17:56.712 "zone_management": false, 00:17:56.712 "zone_append": false, 00:17:56.712 "compare": false, 00:17:56.712 "compare_and_write": false, 00:17:56.712 "abort": true, 00:17:56.712 "seek_hole": false, 00:17:56.712 "seek_data": false, 00:17:56.712 "copy": true, 00:17:56.712 "nvme_iov_md": false 00:17:56.712 }, 00:17:56.712 "memory_domains": [ 00:17:56.712 { 00:17:56.712 "dma_device_id": "system", 00:17:56.712 "dma_device_type": 1 00:17:56.712 }, 00:17:56.712 { 00:17:56.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.712 "dma_device_type": 2 00:17:56.712 } 00:17:56.712 ], 00:17:56.712 "driver_specific": {} 00:17:56.712 } 00:17:56.712 ] 00:17:56.712 15:44:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.712 15:44:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:56.712 15:44:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:56.712 15:44:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:56.712 15:44:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.712 15:44:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:56.712 15:44:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:56.713 15:44:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:56.713 15:44:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.713 15:44:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.713 15:44:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.713 15:44:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.713 15:44:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.713 15:44:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.713 15:44:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.713 15:44:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.713 15:44:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.713 15:44:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.713 "name": "Existed_Raid", 00:17:56.713 "uuid": "66a85e50-425f-49d6-abe9-1c7990c0af7f", 00:17:56.713 "strip_size_kb": 64, 00:17:56.713 "state": "online", 
00:17:56.713 "raid_level": "raid5f", 00:17:56.713 "superblock": false, 00:17:56.713 "num_base_bdevs": 3, 00:17:56.713 "num_base_bdevs_discovered": 3, 00:17:56.713 "num_base_bdevs_operational": 3, 00:17:56.713 "base_bdevs_list": [ 00:17:56.713 { 00:17:56.713 "name": "NewBaseBdev", 00:17:56.713 "uuid": "5b65b1d1-722b-4e24-9e85-7d7e64f17d65", 00:17:56.713 "is_configured": true, 00:17:56.713 "data_offset": 0, 00:17:56.713 "data_size": 65536 00:17:56.713 }, 00:17:56.713 { 00:17:56.713 "name": "BaseBdev2", 00:17:56.713 "uuid": "296019fc-0bb7-4bf4-ae66-b1d5cd48b867", 00:17:56.713 "is_configured": true, 00:17:56.713 "data_offset": 0, 00:17:56.713 "data_size": 65536 00:17:56.713 }, 00:17:56.713 { 00:17:56.713 "name": "BaseBdev3", 00:17:56.713 "uuid": "fdf3fd17-220f-49f7-b620-2cc60748c8ae", 00:17:56.713 "is_configured": true, 00:17:56.713 "data_offset": 0, 00:17:56.713 "data_size": 65536 00:17:56.713 } 00:17:56.713 ] 00:17:56.713 }' 00:17:56.713 15:44:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.713 15:44:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.281 15:44:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:57.281 15:44:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:57.281 15:44:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:57.281 15:44:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:57.281 15:44:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:57.281 15:44:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:57.281 15:44:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:57.281 15:44:40 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.281 15:44:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:57.281 15:44:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.281 [2024-12-06 15:44:40.302446] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:57.281 15:44:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.281 15:44:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:57.281 "name": "Existed_Raid", 00:17:57.281 "aliases": [ 00:17:57.281 "66a85e50-425f-49d6-abe9-1c7990c0af7f" 00:17:57.281 ], 00:17:57.281 "product_name": "Raid Volume", 00:17:57.281 "block_size": 512, 00:17:57.281 "num_blocks": 131072, 00:17:57.281 "uuid": "66a85e50-425f-49d6-abe9-1c7990c0af7f", 00:17:57.281 "assigned_rate_limits": { 00:17:57.281 "rw_ios_per_sec": 0, 00:17:57.281 "rw_mbytes_per_sec": 0, 00:17:57.281 "r_mbytes_per_sec": 0, 00:17:57.281 "w_mbytes_per_sec": 0 00:17:57.281 }, 00:17:57.281 "claimed": false, 00:17:57.281 "zoned": false, 00:17:57.281 "supported_io_types": { 00:17:57.281 "read": true, 00:17:57.281 "write": true, 00:17:57.281 "unmap": false, 00:17:57.281 "flush": false, 00:17:57.281 "reset": true, 00:17:57.281 "nvme_admin": false, 00:17:57.281 "nvme_io": false, 00:17:57.281 "nvme_io_md": false, 00:17:57.281 "write_zeroes": true, 00:17:57.281 "zcopy": false, 00:17:57.281 "get_zone_info": false, 00:17:57.281 "zone_management": false, 00:17:57.281 "zone_append": false, 00:17:57.281 "compare": false, 00:17:57.281 "compare_and_write": false, 00:17:57.281 "abort": false, 00:17:57.281 "seek_hole": false, 00:17:57.281 "seek_data": false, 00:17:57.281 "copy": false, 00:17:57.281 "nvme_iov_md": false 00:17:57.281 }, 00:17:57.281 "driver_specific": { 00:17:57.281 "raid": { 00:17:57.281 "uuid": 
"66a85e50-425f-49d6-abe9-1c7990c0af7f", 00:17:57.281 "strip_size_kb": 64, 00:17:57.281 "state": "online", 00:17:57.281 "raid_level": "raid5f", 00:17:57.281 "superblock": false, 00:17:57.281 "num_base_bdevs": 3, 00:17:57.281 "num_base_bdevs_discovered": 3, 00:17:57.281 "num_base_bdevs_operational": 3, 00:17:57.281 "base_bdevs_list": [ 00:17:57.281 { 00:17:57.281 "name": "NewBaseBdev", 00:17:57.281 "uuid": "5b65b1d1-722b-4e24-9e85-7d7e64f17d65", 00:17:57.281 "is_configured": true, 00:17:57.281 "data_offset": 0, 00:17:57.281 "data_size": 65536 00:17:57.281 }, 00:17:57.281 { 00:17:57.281 "name": "BaseBdev2", 00:17:57.281 "uuid": "296019fc-0bb7-4bf4-ae66-b1d5cd48b867", 00:17:57.281 "is_configured": true, 00:17:57.281 "data_offset": 0, 00:17:57.281 "data_size": 65536 00:17:57.281 }, 00:17:57.281 { 00:17:57.281 "name": "BaseBdev3", 00:17:57.281 "uuid": "fdf3fd17-220f-49f7-b620-2cc60748c8ae", 00:17:57.281 "is_configured": true, 00:17:57.281 "data_offset": 0, 00:17:57.281 "data_size": 65536 00:17:57.281 } 00:17:57.281 ] 00:17:57.281 } 00:17:57.281 } 00:17:57.281 }' 00:17:57.281 15:44:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:57.281 15:44:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:57.281 BaseBdev2 00:17:57.281 BaseBdev3' 00:17:57.281 15:44:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.281 15:44:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:57.281 15:44:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:57.281 15:44:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.282 15:44:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:57.282 15:44:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.282 15:44:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.282 15:44:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.282 15:44:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:57.282 15:44:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:57.282 15:44:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:57.282 15:44:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:57.282 15:44:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.282 15:44:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.282 15:44:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.282 15:44:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.282 15:44:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:57.282 15:44:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:57.282 15:44:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:57.282 15:44:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.282 15:44:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:17:57.282 15:44:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.282 15:44:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.282 15:44:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.282 15:44:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:57.282 15:44:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:57.282 15:44:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:57.282 15:44:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.282 15:44:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.541 [2024-12-06 15:44:40.574209] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:57.541 [2024-12-06 15:44:40.574245] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:57.541 [2024-12-06 15:44:40.574353] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:57.541 [2024-12-06 15:44:40.574711] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:57.541 [2024-12-06 15:44:40.574731] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:57.541 15:44:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.541 15:44:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79904 00:17:57.541 15:44:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 79904 ']' 00:17:57.541 15:44:40 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 79904 00:17:57.541 15:44:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:17:57.541 15:44:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:57.541 15:44:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79904 00:17:57.541 15:44:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:57.541 killing process with pid 79904 00:17:57.541 15:44:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:57.541 15:44:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79904' 00:17:57.541 15:44:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 79904 00:17:57.541 [2024-12-06 15:44:40.626996] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:57.541 15:44:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 79904 00:17:57.801 [2024-12-06 15:44:40.963003] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:59.181 15:44:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:17:59.181 00:17:59.181 real 0m10.637s 00:17:59.181 user 0m16.484s 00:17:59.181 sys 0m2.337s 00:17:59.181 15:44:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:59.181 15:44:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.181 ************************************ 00:17:59.181 END TEST raid5f_state_function_test 00:17:59.181 ************************************ 00:17:59.181 15:44:42 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:17:59.181 15:44:42 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:59.181 15:44:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:59.181 15:44:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:59.181 ************************************ 00:17:59.181 START TEST raid5f_state_function_test_sb 00:17:59.181 ************************************ 00:17:59.181 15:44:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:17:59.181 15:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:59.181 15:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:17:59.181 15:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:59.181 15:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:59.181 15:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:59.181 15:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:59.181 15:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:59.181 15:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:59.181 15:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:59.181 15:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:59.181 15:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:59.181 15:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:59.181 15:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:59.181 15:44:42 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:59.181 15:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:59.181 15:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:59.181 15:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:59.181 15:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:59.181 15:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:59.181 15:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:59.181 15:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:59.181 15:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:59.181 15:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:59.181 15:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:59.181 15:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:59.181 15:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:59.181 Process raid pid: 80529 00:17:59.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:59.181 15:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80529 00:17:59.181 15:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:59.181 15:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80529' 00:17:59.181 15:44:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80529 00:17:59.181 15:44:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80529 ']' 00:17:59.181 15:44:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.181 15:44:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:59.181 15:44:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.181 15:44:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:59.181 15:44:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.181 [2024-12-06 15:44:42.407234] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:17:59.181 [2024-12-06 15:44:42.407618] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:59.441 [2024-12-06 15:44:42.595209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.700 [2024-12-06 15:44:42.741338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.700 [2024-12-06 15:44:42.989945] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:59.700 [2024-12-06 15:44:42.990138] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:59.960 15:44:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:59.960 15:44:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:59.960 15:44:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:59.960 15:44:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.960 15:44:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.220 [2024-12-06 15:44:43.254892] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:00.220 [2024-12-06 15:44:43.254966] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:00.220 [2024-12-06 15:44:43.254986] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:00.220 [2024-12-06 15:44:43.255000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:00.220 [2024-12-06 15:44:43.255008] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:18:00.220 [2024-12-06 15:44:43.255022] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:00.220 15:44:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.220 15:44:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:00.220 15:44:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:00.220 15:44:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:00.220 15:44:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:00.220 15:44:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:00.220 15:44:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:00.220 15:44:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.220 15:44:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.220 15:44:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.220 15:44:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.220 15:44:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.220 15:44:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.220 15:44:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.220 15:44:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.220 15:44:43 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.220 15:44:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.220 "name": "Existed_Raid", 00:18:00.220 "uuid": "26c6e729-e521-4128-ba2d-59912492b6fc", 00:18:00.220 "strip_size_kb": 64, 00:18:00.220 "state": "configuring", 00:18:00.220 "raid_level": "raid5f", 00:18:00.220 "superblock": true, 00:18:00.220 "num_base_bdevs": 3, 00:18:00.220 "num_base_bdevs_discovered": 0, 00:18:00.220 "num_base_bdevs_operational": 3, 00:18:00.220 "base_bdevs_list": [ 00:18:00.220 { 00:18:00.220 "name": "BaseBdev1", 00:18:00.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.220 "is_configured": false, 00:18:00.220 "data_offset": 0, 00:18:00.220 "data_size": 0 00:18:00.220 }, 00:18:00.220 { 00:18:00.220 "name": "BaseBdev2", 00:18:00.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.220 "is_configured": false, 00:18:00.220 "data_offset": 0, 00:18:00.220 "data_size": 0 00:18:00.220 }, 00:18:00.220 { 00:18:00.220 "name": "BaseBdev3", 00:18:00.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.220 "is_configured": false, 00:18:00.220 "data_offset": 0, 00:18:00.220 "data_size": 0 00:18:00.220 } 00:18:00.220 ] 00:18:00.220 }' 00:18:00.220 15:44:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.220 15:44:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.479 15:44:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:00.479 15:44:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.480 15:44:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.480 [2024-12-06 15:44:43.626715] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:00.480 
[2024-12-06 15:44:43.626767] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:00.480 15:44:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.480 15:44:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:00.480 15:44:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.480 15:44:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.480 [2024-12-06 15:44:43.638708] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:00.480 [2024-12-06 15:44:43.638767] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:00.480 [2024-12-06 15:44:43.638778] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:00.480 [2024-12-06 15:44:43.638792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:00.480 [2024-12-06 15:44:43.638800] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:00.480 [2024-12-06 15:44:43.638812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:00.480 15:44:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.480 15:44:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:00.480 15:44:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.480 15:44:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.480 [2024-12-06 15:44:43.694923] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:00.480 BaseBdev1 00:18:00.480 15:44:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.480 15:44:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:00.480 15:44:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:00.480 15:44:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:00.480 15:44:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:00.480 15:44:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:00.480 15:44:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:00.480 15:44:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:00.480 15:44:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.480 15:44:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.480 15:44:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.480 15:44:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:00.480 15:44:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.480 15:44:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.480 [ 00:18:00.480 { 00:18:00.480 "name": "BaseBdev1", 00:18:00.480 "aliases": [ 00:18:00.480 "6e717bdd-f1ea-4a25-b16f-4d4d7e36b9cb" 00:18:00.480 ], 00:18:00.480 "product_name": "Malloc disk", 00:18:00.480 "block_size": 512, 00:18:00.480 
"num_blocks": 65536, 00:18:00.480 "uuid": "6e717bdd-f1ea-4a25-b16f-4d4d7e36b9cb", 00:18:00.480 "assigned_rate_limits": { 00:18:00.480 "rw_ios_per_sec": 0, 00:18:00.480 "rw_mbytes_per_sec": 0, 00:18:00.480 "r_mbytes_per_sec": 0, 00:18:00.480 "w_mbytes_per_sec": 0 00:18:00.480 }, 00:18:00.480 "claimed": true, 00:18:00.480 "claim_type": "exclusive_write", 00:18:00.480 "zoned": false, 00:18:00.480 "supported_io_types": { 00:18:00.480 "read": true, 00:18:00.480 "write": true, 00:18:00.480 "unmap": true, 00:18:00.480 "flush": true, 00:18:00.480 "reset": true, 00:18:00.480 "nvme_admin": false, 00:18:00.480 "nvme_io": false, 00:18:00.480 "nvme_io_md": false, 00:18:00.480 "write_zeroes": true, 00:18:00.480 "zcopy": true, 00:18:00.480 "get_zone_info": false, 00:18:00.480 "zone_management": false, 00:18:00.480 "zone_append": false, 00:18:00.480 "compare": false, 00:18:00.480 "compare_and_write": false, 00:18:00.480 "abort": true, 00:18:00.480 "seek_hole": false, 00:18:00.480 "seek_data": false, 00:18:00.480 "copy": true, 00:18:00.480 "nvme_iov_md": false 00:18:00.480 }, 00:18:00.480 "memory_domains": [ 00:18:00.480 { 00:18:00.480 "dma_device_id": "system", 00:18:00.480 "dma_device_type": 1 00:18:00.480 }, 00:18:00.480 { 00:18:00.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.480 "dma_device_type": 2 00:18:00.480 } 00:18:00.480 ], 00:18:00.480 "driver_specific": {} 00:18:00.480 } 00:18:00.480 ] 00:18:00.480 15:44:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.480 15:44:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:00.480 15:44:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:00.480 15:44:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:00.480 15:44:43 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:00.480 15:44:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:00.480 15:44:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:00.480 15:44:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:00.480 15:44:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.480 15:44:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.480 15:44:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.480 15:44:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.480 15:44:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.480 15:44:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.480 15:44:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.480 15:44:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.740 15:44:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.740 15:44:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.740 "name": "Existed_Raid", 00:18:00.740 "uuid": "ca08a7d8-7d44-4141-a58c-f6852d90faec", 00:18:00.740 "strip_size_kb": 64, 00:18:00.740 "state": "configuring", 00:18:00.740 "raid_level": "raid5f", 00:18:00.740 "superblock": true, 00:18:00.740 "num_base_bdevs": 3, 00:18:00.740 "num_base_bdevs_discovered": 1, 00:18:00.740 "num_base_bdevs_operational": 3, 00:18:00.740 "base_bdevs_list": [ 00:18:00.740 { 00:18:00.740 
"name": "BaseBdev1", 00:18:00.740 "uuid": "6e717bdd-f1ea-4a25-b16f-4d4d7e36b9cb", 00:18:00.740 "is_configured": true, 00:18:00.740 "data_offset": 2048, 00:18:00.740 "data_size": 63488 00:18:00.740 }, 00:18:00.740 { 00:18:00.740 "name": "BaseBdev2", 00:18:00.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.740 "is_configured": false, 00:18:00.740 "data_offset": 0, 00:18:00.740 "data_size": 0 00:18:00.740 }, 00:18:00.740 { 00:18:00.740 "name": "BaseBdev3", 00:18:00.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.740 "is_configured": false, 00:18:00.740 "data_offset": 0, 00:18:00.740 "data_size": 0 00:18:00.740 } 00:18:00.740 ] 00:18:00.740 }' 00:18:00.740 15:44:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.740 15:44:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.000 15:44:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:01.000 15:44:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.000 15:44:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.000 [2024-12-06 15:44:44.142391] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:01.000 [2024-12-06 15:44:44.142464] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:01.000 15:44:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.000 15:44:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:01.000 15:44:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.000 15:44:44 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:18:01.000 [2024-12-06 15:44:44.150420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:01.000 [2024-12-06 15:44:44.152843] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:01.000 [2024-12-06 15:44:44.152894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:01.000 [2024-12-06 15:44:44.152906] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:01.000 [2024-12-06 15:44:44.152919] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:01.000 15:44:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.000 15:44:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:01.000 15:44:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:01.000 15:44:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:01.000 15:44:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:01.000 15:44:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:01.000 15:44:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:01.000 15:44:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:01.000 15:44:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:01.000 15:44:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.000 15:44:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:18:01.000 15:44:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.000 15:44:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.000 15:44:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.000 15:44:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.000 15:44:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.000 15:44:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.000 15:44:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.000 15:44:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.000 "name": "Existed_Raid", 00:18:01.000 "uuid": "4b362402-3de5-4f91-8b0b-65a041fbb305", 00:18:01.000 "strip_size_kb": 64, 00:18:01.000 "state": "configuring", 00:18:01.000 "raid_level": "raid5f", 00:18:01.000 "superblock": true, 00:18:01.000 "num_base_bdevs": 3, 00:18:01.000 "num_base_bdevs_discovered": 1, 00:18:01.000 "num_base_bdevs_operational": 3, 00:18:01.000 "base_bdevs_list": [ 00:18:01.000 { 00:18:01.000 "name": "BaseBdev1", 00:18:01.000 "uuid": "6e717bdd-f1ea-4a25-b16f-4d4d7e36b9cb", 00:18:01.000 "is_configured": true, 00:18:01.000 "data_offset": 2048, 00:18:01.000 "data_size": 63488 00:18:01.000 }, 00:18:01.000 { 00:18:01.000 "name": "BaseBdev2", 00:18:01.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.000 "is_configured": false, 00:18:01.000 "data_offset": 0, 00:18:01.000 "data_size": 0 00:18:01.000 }, 00:18:01.000 { 00:18:01.000 "name": "BaseBdev3", 00:18:01.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.000 "is_configured": false, 00:18:01.000 "data_offset": 0, 00:18:01.000 "data_size": 
0 00:18:01.000 } 00:18:01.000 ] 00:18:01.000 }' 00:18:01.000 15:44:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.000 15:44:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.570 15:44:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:01.570 15:44:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.570 15:44:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.570 [2024-12-06 15:44:44.602859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:01.570 BaseBdev2 00:18:01.570 15:44:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.570 15:44:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:01.570 15:44:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:01.570 15:44:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:01.570 15:44:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:01.570 15:44:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:01.570 15:44:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:01.570 15:44:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:01.570 15:44:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.570 15:44:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.570 15:44:44 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.570 15:44:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:01.570 15:44:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.570 15:44:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.570 [ 00:18:01.570 { 00:18:01.570 "name": "BaseBdev2", 00:18:01.570 "aliases": [ 00:18:01.570 "5eea8e92-cf79-4a9b-bb85-17441cdf0fe3" 00:18:01.570 ], 00:18:01.570 "product_name": "Malloc disk", 00:18:01.570 "block_size": 512, 00:18:01.570 "num_blocks": 65536, 00:18:01.570 "uuid": "5eea8e92-cf79-4a9b-bb85-17441cdf0fe3", 00:18:01.570 "assigned_rate_limits": { 00:18:01.570 "rw_ios_per_sec": 0, 00:18:01.570 "rw_mbytes_per_sec": 0, 00:18:01.570 "r_mbytes_per_sec": 0, 00:18:01.570 "w_mbytes_per_sec": 0 00:18:01.570 }, 00:18:01.570 "claimed": true, 00:18:01.570 "claim_type": "exclusive_write", 00:18:01.570 "zoned": false, 00:18:01.570 "supported_io_types": { 00:18:01.570 "read": true, 00:18:01.570 "write": true, 00:18:01.570 "unmap": true, 00:18:01.570 "flush": true, 00:18:01.570 "reset": true, 00:18:01.570 "nvme_admin": false, 00:18:01.570 "nvme_io": false, 00:18:01.570 "nvme_io_md": false, 00:18:01.570 "write_zeroes": true, 00:18:01.570 "zcopy": true, 00:18:01.570 "get_zone_info": false, 00:18:01.570 "zone_management": false, 00:18:01.570 "zone_append": false, 00:18:01.570 "compare": false, 00:18:01.570 "compare_and_write": false, 00:18:01.570 "abort": true, 00:18:01.570 "seek_hole": false, 00:18:01.570 "seek_data": false, 00:18:01.570 "copy": true, 00:18:01.570 "nvme_iov_md": false 00:18:01.570 }, 00:18:01.571 "memory_domains": [ 00:18:01.571 { 00:18:01.571 "dma_device_id": "system", 00:18:01.571 "dma_device_type": 1 00:18:01.571 }, 00:18:01.571 { 00:18:01.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.571 "dma_device_type": 2 00:18:01.571 } 
00:18:01.571 ], 00:18:01.571 "driver_specific": {} 00:18:01.571 } 00:18:01.571 ] 00:18:01.571 15:44:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.571 15:44:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:01.571 15:44:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:01.571 15:44:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:01.571 15:44:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:01.571 15:44:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:01.571 15:44:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:01.571 15:44:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:01.571 15:44:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:01.571 15:44:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:01.571 15:44:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.571 15:44:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.571 15:44:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.571 15:44:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.571 15:44:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.571 15:44:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:18:01.571 15:44:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.571 15:44:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.571 15:44:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.571 15:44:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.571 "name": "Existed_Raid", 00:18:01.571 "uuid": "4b362402-3de5-4f91-8b0b-65a041fbb305", 00:18:01.571 "strip_size_kb": 64, 00:18:01.571 "state": "configuring", 00:18:01.571 "raid_level": "raid5f", 00:18:01.571 "superblock": true, 00:18:01.571 "num_base_bdevs": 3, 00:18:01.571 "num_base_bdevs_discovered": 2, 00:18:01.571 "num_base_bdevs_operational": 3, 00:18:01.571 "base_bdevs_list": [ 00:18:01.571 { 00:18:01.571 "name": "BaseBdev1", 00:18:01.571 "uuid": "6e717bdd-f1ea-4a25-b16f-4d4d7e36b9cb", 00:18:01.571 "is_configured": true, 00:18:01.571 "data_offset": 2048, 00:18:01.571 "data_size": 63488 00:18:01.571 }, 00:18:01.571 { 00:18:01.571 "name": "BaseBdev2", 00:18:01.571 "uuid": "5eea8e92-cf79-4a9b-bb85-17441cdf0fe3", 00:18:01.571 "is_configured": true, 00:18:01.571 "data_offset": 2048, 00:18:01.571 "data_size": 63488 00:18:01.571 }, 00:18:01.571 { 00:18:01.571 "name": "BaseBdev3", 00:18:01.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.571 "is_configured": false, 00:18:01.571 "data_offset": 0, 00:18:01.571 "data_size": 0 00:18:01.571 } 00:18:01.571 ] 00:18:01.571 }' 00:18:01.571 15:44:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.571 15:44:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.844 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:01.844 15:44:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:18:01.844 15:44:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.844 [2024-12-06 15:44:45.111791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:01.844 [2024-12-06 15:44:45.112099] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:01.844 [2024-12-06 15:44:45.112126] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:01.844 BaseBdev3 00:18:01.844 [2024-12-06 15:44:45.112450] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:01.844 15:44:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.844 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:01.844 15:44:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:01.844 15:44:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:01.844 15:44:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:01.844 15:44:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:01.844 15:44:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:01.844 15:44:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:01.844 15:44:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.844 15:44:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.844 [2024-12-06 15:44:45.118437] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:01.844 [2024-12-06 15:44:45.118467] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:01.844 [2024-12-06 15:44:45.118781] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.103 15:44:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.103 15:44:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:02.103 15:44:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.103 15:44:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.103 [ 00:18:02.103 { 00:18:02.103 "name": "BaseBdev3", 00:18:02.103 "aliases": [ 00:18:02.103 "e29a0f00-1e58-4acb-96c4-519465d616d6" 00:18:02.103 ], 00:18:02.103 "product_name": "Malloc disk", 00:18:02.103 "block_size": 512, 00:18:02.103 "num_blocks": 65536, 00:18:02.103 "uuid": "e29a0f00-1e58-4acb-96c4-519465d616d6", 00:18:02.103 "assigned_rate_limits": { 00:18:02.103 "rw_ios_per_sec": 0, 00:18:02.103 "rw_mbytes_per_sec": 0, 00:18:02.103 "r_mbytes_per_sec": 0, 00:18:02.103 "w_mbytes_per_sec": 0 00:18:02.103 }, 00:18:02.103 "claimed": true, 00:18:02.103 "claim_type": "exclusive_write", 00:18:02.103 "zoned": false, 00:18:02.103 "supported_io_types": { 00:18:02.103 "read": true, 00:18:02.103 "write": true, 00:18:02.103 "unmap": true, 00:18:02.103 "flush": true, 00:18:02.103 "reset": true, 00:18:02.103 "nvme_admin": false, 00:18:02.103 "nvme_io": false, 00:18:02.103 "nvme_io_md": false, 00:18:02.103 "write_zeroes": true, 00:18:02.103 "zcopy": true, 00:18:02.103 "get_zone_info": false, 00:18:02.103 "zone_management": false, 00:18:02.103 "zone_append": false, 00:18:02.103 "compare": false, 00:18:02.103 "compare_and_write": false, 00:18:02.103 "abort": true, 00:18:02.103 "seek_hole": false, 00:18:02.103 "seek_data": false, 00:18:02.103 "copy": true, 00:18:02.103 
"nvme_iov_md": false 00:18:02.103 }, 00:18:02.103 "memory_domains": [ 00:18:02.103 { 00:18:02.103 "dma_device_id": "system", 00:18:02.103 "dma_device_type": 1 00:18:02.103 }, 00:18:02.103 { 00:18:02.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.103 "dma_device_type": 2 00:18:02.103 } 00:18:02.103 ], 00:18:02.103 "driver_specific": {} 00:18:02.103 } 00:18:02.103 ] 00:18:02.103 15:44:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.103 15:44:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:02.103 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:02.103 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:02.103 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:02.103 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:02.103 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.103 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:02.103 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:02.103 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:02.103 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.103 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.103 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.103 15:44:45 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.103 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.103 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:02.103 15:44:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.103 15:44:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.103 15:44:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.103 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.103 "name": "Existed_Raid", 00:18:02.103 "uuid": "4b362402-3de5-4f91-8b0b-65a041fbb305", 00:18:02.103 "strip_size_kb": 64, 00:18:02.103 "state": "online", 00:18:02.103 "raid_level": "raid5f", 00:18:02.103 "superblock": true, 00:18:02.103 "num_base_bdevs": 3, 00:18:02.103 "num_base_bdevs_discovered": 3, 00:18:02.103 "num_base_bdevs_operational": 3, 00:18:02.103 "base_bdevs_list": [ 00:18:02.103 { 00:18:02.103 "name": "BaseBdev1", 00:18:02.103 "uuid": "6e717bdd-f1ea-4a25-b16f-4d4d7e36b9cb", 00:18:02.103 "is_configured": true, 00:18:02.103 "data_offset": 2048, 00:18:02.103 "data_size": 63488 00:18:02.103 }, 00:18:02.103 { 00:18:02.103 "name": "BaseBdev2", 00:18:02.103 "uuid": "5eea8e92-cf79-4a9b-bb85-17441cdf0fe3", 00:18:02.103 "is_configured": true, 00:18:02.103 "data_offset": 2048, 00:18:02.103 "data_size": 63488 00:18:02.103 }, 00:18:02.103 { 00:18:02.103 "name": "BaseBdev3", 00:18:02.103 "uuid": "e29a0f00-1e58-4acb-96c4-519465d616d6", 00:18:02.103 "is_configured": true, 00:18:02.103 "data_offset": 2048, 00:18:02.103 "data_size": 63488 00:18:02.103 } 00:18:02.103 ] 00:18:02.103 }' 00:18:02.103 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.103 15:44:45 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.361 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:02.361 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:02.361 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:02.361 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:02.361 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:02.361 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:02.361 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:02.361 15:44:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.361 15:44:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.361 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:02.361 [2024-12-06 15:44:45.573386] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:02.361 15:44:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.361 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:02.361 "name": "Existed_Raid", 00:18:02.361 "aliases": [ 00:18:02.361 "4b362402-3de5-4f91-8b0b-65a041fbb305" 00:18:02.361 ], 00:18:02.361 "product_name": "Raid Volume", 00:18:02.361 "block_size": 512, 00:18:02.361 "num_blocks": 126976, 00:18:02.361 "uuid": "4b362402-3de5-4f91-8b0b-65a041fbb305", 00:18:02.361 "assigned_rate_limits": { 00:18:02.361 "rw_ios_per_sec": 0, 00:18:02.361 
"rw_mbytes_per_sec": 0, 00:18:02.361 "r_mbytes_per_sec": 0, 00:18:02.361 "w_mbytes_per_sec": 0 00:18:02.361 }, 00:18:02.361 "claimed": false, 00:18:02.361 "zoned": false, 00:18:02.361 "supported_io_types": { 00:18:02.361 "read": true, 00:18:02.361 "write": true, 00:18:02.361 "unmap": false, 00:18:02.361 "flush": false, 00:18:02.361 "reset": true, 00:18:02.361 "nvme_admin": false, 00:18:02.361 "nvme_io": false, 00:18:02.361 "nvme_io_md": false, 00:18:02.361 "write_zeroes": true, 00:18:02.361 "zcopy": false, 00:18:02.361 "get_zone_info": false, 00:18:02.361 "zone_management": false, 00:18:02.362 "zone_append": false, 00:18:02.362 "compare": false, 00:18:02.362 "compare_and_write": false, 00:18:02.362 "abort": false, 00:18:02.362 "seek_hole": false, 00:18:02.362 "seek_data": false, 00:18:02.362 "copy": false, 00:18:02.362 "nvme_iov_md": false 00:18:02.362 }, 00:18:02.362 "driver_specific": { 00:18:02.362 "raid": { 00:18:02.362 "uuid": "4b362402-3de5-4f91-8b0b-65a041fbb305", 00:18:02.362 "strip_size_kb": 64, 00:18:02.362 "state": "online", 00:18:02.362 "raid_level": "raid5f", 00:18:02.362 "superblock": true, 00:18:02.362 "num_base_bdevs": 3, 00:18:02.362 "num_base_bdevs_discovered": 3, 00:18:02.362 "num_base_bdevs_operational": 3, 00:18:02.362 "base_bdevs_list": [ 00:18:02.362 { 00:18:02.362 "name": "BaseBdev1", 00:18:02.362 "uuid": "6e717bdd-f1ea-4a25-b16f-4d4d7e36b9cb", 00:18:02.362 "is_configured": true, 00:18:02.362 "data_offset": 2048, 00:18:02.362 "data_size": 63488 00:18:02.362 }, 00:18:02.362 { 00:18:02.362 "name": "BaseBdev2", 00:18:02.362 "uuid": "5eea8e92-cf79-4a9b-bb85-17441cdf0fe3", 00:18:02.362 "is_configured": true, 00:18:02.362 "data_offset": 2048, 00:18:02.362 "data_size": 63488 00:18:02.362 }, 00:18:02.362 { 00:18:02.362 "name": "BaseBdev3", 00:18:02.362 "uuid": "e29a0f00-1e58-4acb-96c4-519465d616d6", 00:18:02.362 "is_configured": true, 00:18:02.362 "data_offset": 2048, 00:18:02.362 "data_size": 63488 00:18:02.362 } 00:18:02.362 ] 00:18:02.362 } 
00:18:02.362 } 00:18:02.362 }' 00:18:02.362 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:02.362 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:02.362 BaseBdev2 00:18:02.362 BaseBdev3' 00:18:02.620 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.620 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:02.620 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:02.620 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:02.620 15:44:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.620 15:44:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.620 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.620 15:44:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.620 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:02.620 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:02.620 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:02.620 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:02.620 15:44:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:02.620 15:44:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.620 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.620 15:44:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.620 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:02.620 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:02.620 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:02.620 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:02.620 15:44:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.620 15:44:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.620 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.620 15:44:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.620 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:02.620 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:02.620 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:02.620 15:44:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.620 15:44:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.620 [2024-12-06 15:44:45.844823] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:02.881 15:44:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.881 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:02.881 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:18:02.881 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:02.881 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:18:02.881 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:02.881 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:18:02.881 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:02.881 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.881 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:02.881 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:02.881 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:02.881 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.881 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.881 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.881 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.881 15:44:45 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:02.881 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.881 15:44:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.881 15:44:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.881 15:44:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.881 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.881 "name": "Existed_Raid", 00:18:02.881 "uuid": "4b362402-3de5-4f91-8b0b-65a041fbb305", 00:18:02.881 "strip_size_kb": 64, 00:18:02.881 "state": "online", 00:18:02.881 "raid_level": "raid5f", 00:18:02.881 "superblock": true, 00:18:02.881 "num_base_bdevs": 3, 00:18:02.881 "num_base_bdevs_discovered": 2, 00:18:02.881 "num_base_bdevs_operational": 2, 00:18:02.881 "base_bdevs_list": [ 00:18:02.881 { 00:18:02.881 "name": null, 00:18:02.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.881 "is_configured": false, 00:18:02.881 "data_offset": 0, 00:18:02.881 "data_size": 63488 00:18:02.881 }, 00:18:02.881 { 00:18:02.881 "name": "BaseBdev2", 00:18:02.881 "uuid": "5eea8e92-cf79-4a9b-bb85-17441cdf0fe3", 00:18:02.881 "is_configured": true, 00:18:02.881 "data_offset": 2048, 00:18:02.881 "data_size": 63488 00:18:02.881 }, 00:18:02.881 { 00:18:02.881 "name": "BaseBdev3", 00:18:02.881 "uuid": "e29a0f00-1e58-4acb-96c4-519465d616d6", 00:18:02.881 "is_configured": true, 00:18:02.881 "data_offset": 2048, 00:18:02.881 "data_size": 63488 00:18:02.881 } 00:18:02.881 ] 00:18:02.881 }' 00:18:02.881 15:44:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.881 15:44:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.150 15:44:46 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:03.150 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:03.150 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.150 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.150 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.150 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:03.150 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.150 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:03.150 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:03.150 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:03.150 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.150 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.150 [2024-12-06 15:44:46.381102] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:03.150 [2024-12-06 15:44:46.381298] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:03.409 [2024-12-06 15:44:46.485056] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:03.409 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.409 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:03.409 15:44:46 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:03.409 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.409 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:03.409 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.409 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.409 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.409 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:03.409 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:03.409 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:03.409 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.409 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.409 [2024-12-06 15:44:46.540984] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:03.409 [2024-12-06 15:44:46.541043] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:03.409 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.409 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:03.409 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:03.409 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:03.409 15:44:46 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.409 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.409 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.409 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.409 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:03.409 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:03.409 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:18:03.409 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:03.409 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:03.409 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:03.409 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.409 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.669 BaseBdev2 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # 
[[ -z '' ]] 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.669 [ 00:18:03.669 { 00:18:03.669 "name": "BaseBdev2", 00:18:03.669 "aliases": [ 00:18:03.669 "f9708cd0-c635-4cd5-9838-909aa292b0b4" 00:18:03.669 ], 00:18:03.669 "product_name": "Malloc disk", 00:18:03.669 "block_size": 512, 00:18:03.669 "num_blocks": 65536, 00:18:03.669 "uuid": "f9708cd0-c635-4cd5-9838-909aa292b0b4", 00:18:03.669 "assigned_rate_limits": { 00:18:03.669 "rw_ios_per_sec": 0, 00:18:03.669 "rw_mbytes_per_sec": 0, 00:18:03.669 "r_mbytes_per_sec": 0, 00:18:03.669 "w_mbytes_per_sec": 0 00:18:03.669 }, 00:18:03.669 "claimed": false, 00:18:03.669 "zoned": false, 00:18:03.669 "supported_io_types": { 00:18:03.669 "read": true, 00:18:03.669 "write": true, 00:18:03.669 "unmap": true, 00:18:03.669 "flush": true, 00:18:03.669 "reset": true, 00:18:03.669 "nvme_admin": false, 00:18:03.669 "nvme_io": false, 00:18:03.669 "nvme_io_md": false, 00:18:03.669 "write_zeroes": true, 00:18:03.669 "zcopy": true, 00:18:03.669 "get_zone_info": false, 00:18:03.669 "zone_management": false, 00:18:03.669 "zone_append": false, 
00:18:03.669 "compare": false, 00:18:03.669 "compare_and_write": false, 00:18:03.669 "abort": true, 00:18:03.669 "seek_hole": false, 00:18:03.669 "seek_data": false, 00:18:03.669 "copy": true, 00:18:03.669 "nvme_iov_md": false 00:18:03.669 }, 00:18:03.669 "memory_domains": [ 00:18:03.669 { 00:18:03.669 "dma_device_id": "system", 00:18:03.669 "dma_device_type": 1 00:18:03.669 }, 00:18:03.669 { 00:18:03.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.669 "dma_device_type": 2 00:18:03.669 } 00:18:03.669 ], 00:18:03.669 "driver_specific": {} 00:18:03.669 } 00:18:03.669 ] 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.669 BaseBdev3 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:03.669 
15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.669 [ 00:18:03.669 { 00:18:03.669 "name": "BaseBdev3", 00:18:03.669 "aliases": [ 00:18:03.669 "290c5ec2-dfa9-4f93-b4b2-7daafe088414" 00:18:03.669 ], 00:18:03.669 "product_name": "Malloc disk", 00:18:03.669 "block_size": 512, 00:18:03.669 "num_blocks": 65536, 00:18:03.669 "uuid": "290c5ec2-dfa9-4f93-b4b2-7daafe088414", 00:18:03.669 "assigned_rate_limits": { 00:18:03.669 "rw_ios_per_sec": 0, 00:18:03.669 "rw_mbytes_per_sec": 0, 00:18:03.669 "r_mbytes_per_sec": 0, 00:18:03.669 "w_mbytes_per_sec": 0 00:18:03.669 }, 00:18:03.669 "claimed": false, 00:18:03.669 "zoned": false, 00:18:03.669 "supported_io_types": { 00:18:03.669 "read": true, 00:18:03.669 "write": true, 00:18:03.669 "unmap": true, 00:18:03.669 "flush": true, 00:18:03.669 "reset": true, 00:18:03.669 "nvme_admin": false, 00:18:03.669 "nvme_io": false, 00:18:03.669 "nvme_io_md": false, 00:18:03.669 "write_zeroes": true, 00:18:03.669 "zcopy": true, 00:18:03.669 "get_zone_info": 
false, 00:18:03.669 "zone_management": false, 00:18:03.669 "zone_append": false, 00:18:03.669 "compare": false, 00:18:03.669 "compare_and_write": false, 00:18:03.669 "abort": true, 00:18:03.669 "seek_hole": false, 00:18:03.669 "seek_data": false, 00:18:03.669 "copy": true, 00:18:03.669 "nvme_iov_md": false 00:18:03.669 }, 00:18:03.669 "memory_domains": [ 00:18:03.669 { 00:18:03.669 "dma_device_id": "system", 00:18:03.669 "dma_device_type": 1 00:18:03.669 }, 00:18:03.669 { 00:18:03.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.669 "dma_device_type": 2 00:18:03.669 } 00:18:03.669 ], 00:18:03.669 "driver_specific": {} 00:18:03.669 } 00:18:03.669 ] 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.669 [2024-12-06 15:44:46.873631] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:03.669 [2024-12-06 15:44:46.873683] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:03.669 [2024-12-06 15:44:46.873708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:03.669 [2024-12-06 15:44:46.876032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.669 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.669 15:44:46 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.669 "name": "Existed_Raid", 00:18:03.669 "uuid": "230fae98-e7ee-46b6-bb2d-8aa445c6a490", 00:18:03.669 "strip_size_kb": 64, 00:18:03.669 "state": "configuring", 00:18:03.669 "raid_level": "raid5f", 00:18:03.669 "superblock": true, 00:18:03.669 "num_base_bdevs": 3, 00:18:03.669 "num_base_bdevs_discovered": 2, 00:18:03.669 "num_base_bdevs_operational": 3, 00:18:03.669 "base_bdevs_list": [ 00:18:03.669 { 00:18:03.669 "name": "BaseBdev1", 00:18:03.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.669 "is_configured": false, 00:18:03.669 "data_offset": 0, 00:18:03.669 "data_size": 0 00:18:03.669 }, 00:18:03.669 { 00:18:03.669 "name": "BaseBdev2", 00:18:03.669 "uuid": "f9708cd0-c635-4cd5-9838-909aa292b0b4", 00:18:03.669 "is_configured": true, 00:18:03.670 "data_offset": 2048, 00:18:03.670 "data_size": 63488 00:18:03.670 }, 00:18:03.670 { 00:18:03.670 "name": "BaseBdev3", 00:18:03.670 "uuid": "290c5ec2-dfa9-4f93-b4b2-7daafe088414", 00:18:03.670 "is_configured": true, 00:18:03.670 "data_offset": 2048, 00:18:03.670 "data_size": 63488 00:18:03.670 } 00:18:03.670 ] 00:18:03.670 }' 00:18:03.670 15:44:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.670 15:44:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.235 15:44:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:04.235 15:44:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.235 15:44:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.235 [2024-12-06 15:44:47.277029] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:04.235 15:44:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.236 
15:44:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:04.236 15:44:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:04.236 15:44:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:04.236 15:44:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:04.236 15:44:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:04.236 15:44:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:04.236 15:44:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.236 15:44:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.236 15:44:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.236 15:44:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.236 15:44:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:04.236 15:44:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.236 15:44:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.236 15:44:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.236 15:44:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.236 15:44:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.236 "name": "Existed_Raid", 00:18:04.236 "uuid": 
"230fae98-e7ee-46b6-bb2d-8aa445c6a490", 00:18:04.236 "strip_size_kb": 64, 00:18:04.236 "state": "configuring", 00:18:04.236 "raid_level": "raid5f", 00:18:04.236 "superblock": true, 00:18:04.236 "num_base_bdevs": 3, 00:18:04.236 "num_base_bdevs_discovered": 1, 00:18:04.236 "num_base_bdevs_operational": 3, 00:18:04.236 "base_bdevs_list": [ 00:18:04.236 { 00:18:04.236 "name": "BaseBdev1", 00:18:04.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.236 "is_configured": false, 00:18:04.236 "data_offset": 0, 00:18:04.236 "data_size": 0 00:18:04.236 }, 00:18:04.236 { 00:18:04.236 "name": null, 00:18:04.236 "uuid": "f9708cd0-c635-4cd5-9838-909aa292b0b4", 00:18:04.236 "is_configured": false, 00:18:04.236 "data_offset": 0, 00:18:04.236 "data_size": 63488 00:18:04.236 }, 00:18:04.236 { 00:18:04.236 "name": "BaseBdev3", 00:18:04.236 "uuid": "290c5ec2-dfa9-4f93-b4b2-7daafe088414", 00:18:04.236 "is_configured": true, 00:18:04.236 "data_offset": 2048, 00:18:04.236 "data_size": 63488 00:18:04.236 } 00:18:04.236 ] 00:18:04.236 }' 00:18:04.236 15:44:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.236 15:44:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.494 15:44:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.494 15:44:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.494 15:44:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.494 15:44:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:04.494 15:44:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.494 15:44:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:04.494 15:44:47 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:04.494 15:44:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.494 15:44:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.494 [2024-12-06 15:44:47.784660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:04.753 BaseBdev1 00:18:04.753 15:44:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.753 15:44:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:04.753 15:44:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:04.753 15:44:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:04.753 15:44:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:04.753 15:44:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:04.753 15:44:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:04.753 15:44:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:04.753 15:44:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.753 15:44:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.753 15:44:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.753 15:44:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:04.753 15:44:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:04.753 15:44:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.753 [ 00:18:04.753 { 00:18:04.753 "name": "BaseBdev1", 00:18:04.753 "aliases": [ 00:18:04.753 "38115cc8-7d96-4df3-be56-b8ed2736c343" 00:18:04.753 ], 00:18:04.753 "product_name": "Malloc disk", 00:18:04.753 "block_size": 512, 00:18:04.753 "num_blocks": 65536, 00:18:04.753 "uuid": "38115cc8-7d96-4df3-be56-b8ed2736c343", 00:18:04.753 "assigned_rate_limits": { 00:18:04.753 "rw_ios_per_sec": 0, 00:18:04.753 "rw_mbytes_per_sec": 0, 00:18:04.753 "r_mbytes_per_sec": 0, 00:18:04.753 "w_mbytes_per_sec": 0 00:18:04.753 }, 00:18:04.753 "claimed": true, 00:18:04.753 "claim_type": "exclusive_write", 00:18:04.753 "zoned": false, 00:18:04.753 "supported_io_types": { 00:18:04.753 "read": true, 00:18:04.753 "write": true, 00:18:04.753 "unmap": true, 00:18:04.753 "flush": true, 00:18:04.753 "reset": true, 00:18:04.753 "nvme_admin": false, 00:18:04.753 "nvme_io": false, 00:18:04.753 "nvme_io_md": false, 00:18:04.753 "write_zeroes": true, 00:18:04.753 "zcopy": true, 00:18:04.753 "get_zone_info": false, 00:18:04.753 "zone_management": false, 00:18:04.753 "zone_append": false, 00:18:04.753 "compare": false, 00:18:04.753 "compare_and_write": false, 00:18:04.753 "abort": true, 00:18:04.753 "seek_hole": false, 00:18:04.753 "seek_data": false, 00:18:04.753 "copy": true, 00:18:04.753 "nvme_iov_md": false 00:18:04.753 }, 00:18:04.753 "memory_domains": [ 00:18:04.753 { 00:18:04.753 "dma_device_id": "system", 00:18:04.753 "dma_device_type": 1 00:18:04.753 }, 00:18:04.753 { 00:18:04.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.753 "dma_device_type": 2 00:18:04.753 } 00:18:04.753 ], 00:18:04.753 "driver_specific": {} 00:18:04.753 } 00:18:04.753 ] 00:18:04.753 15:44:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.753 15:44:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # 
return 0 00:18:04.753 15:44:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:04.753 15:44:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:04.753 15:44:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:04.753 15:44:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:04.753 15:44:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:04.753 15:44:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:04.753 15:44:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.753 15:44:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.753 15:44:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.753 15:44:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.753 15:44:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.753 15:44:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.753 15:44:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:04.753 15:44:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.753 15:44:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.753 15:44:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.753 "name": "Existed_Raid", 00:18:04.753 "uuid": 
"230fae98-e7ee-46b6-bb2d-8aa445c6a490", 00:18:04.753 "strip_size_kb": 64, 00:18:04.753 "state": "configuring", 00:18:04.753 "raid_level": "raid5f", 00:18:04.753 "superblock": true, 00:18:04.753 "num_base_bdevs": 3, 00:18:04.753 "num_base_bdevs_discovered": 2, 00:18:04.753 "num_base_bdevs_operational": 3, 00:18:04.753 "base_bdevs_list": [ 00:18:04.753 { 00:18:04.753 "name": "BaseBdev1", 00:18:04.753 "uuid": "38115cc8-7d96-4df3-be56-b8ed2736c343", 00:18:04.753 "is_configured": true, 00:18:04.753 "data_offset": 2048, 00:18:04.753 "data_size": 63488 00:18:04.753 }, 00:18:04.753 { 00:18:04.753 "name": null, 00:18:04.753 "uuid": "f9708cd0-c635-4cd5-9838-909aa292b0b4", 00:18:04.753 "is_configured": false, 00:18:04.753 "data_offset": 0, 00:18:04.753 "data_size": 63488 00:18:04.753 }, 00:18:04.753 { 00:18:04.753 "name": "BaseBdev3", 00:18:04.753 "uuid": "290c5ec2-dfa9-4f93-b4b2-7daafe088414", 00:18:04.753 "is_configured": true, 00:18:04.753 "data_offset": 2048, 00:18:04.753 "data_size": 63488 00:18:04.753 } 00:18:04.753 ] 00:18:04.753 }' 00:18:04.753 15:44:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.753 15:44:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.012 15:44:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.012 15:44:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.012 15:44:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.012 15:44:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:05.012 15:44:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.271 15:44:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:05.271 15:44:48 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:05.271 15:44:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.271 15:44:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.271 [2024-12-06 15:44:48.316072] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:05.271 15:44:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.271 15:44:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:05.271 15:44:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:05.271 15:44:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:05.271 15:44:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:05.271 15:44:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:05.272 15:44:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:05.272 15:44:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.272 15:44:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.272 15:44:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.272 15:44:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.272 15:44:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.272 15:44:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:05.272 15:44:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.272 15:44:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:05.272 15:44:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.272 15:44:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.272 "name": "Existed_Raid", 00:18:05.272 "uuid": "230fae98-e7ee-46b6-bb2d-8aa445c6a490", 00:18:05.272 "strip_size_kb": 64, 00:18:05.272 "state": "configuring", 00:18:05.272 "raid_level": "raid5f", 00:18:05.272 "superblock": true, 00:18:05.272 "num_base_bdevs": 3, 00:18:05.272 "num_base_bdevs_discovered": 1, 00:18:05.272 "num_base_bdevs_operational": 3, 00:18:05.272 "base_bdevs_list": [ 00:18:05.272 { 00:18:05.272 "name": "BaseBdev1", 00:18:05.272 "uuid": "38115cc8-7d96-4df3-be56-b8ed2736c343", 00:18:05.272 "is_configured": true, 00:18:05.272 "data_offset": 2048, 00:18:05.272 "data_size": 63488 00:18:05.272 }, 00:18:05.272 { 00:18:05.272 "name": null, 00:18:05.272 "uuid": "f9708cd0-c635-4cd5-9838-909aa292b0b4", 00:18:05.272 "is_configured": false, 00:18:05.272 "data_offset": 0, 00:18:05.272 "data_size": 63488 00:18:05.272 }, 00:18:05.272 { 00:18:05.272 "name": null, 00:18:05.272 "uuid": "290c5ec2-dfa9-4f93-b4b2-7daafe088414", 00:18:05.272 "is_configured": false, 00:18:05.272 "data_offset": 0, 00:18:05.272 "data_size": 63488 00:18:05.272 } 00:18:05.272 ] 00:18:05.272 }' 00:18:05.272 15:44:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.272 15:44:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.531 15:44:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:05.531 15:44:48 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.531 15:44:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.532 15:44:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.532 15:44:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.532 15:44:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:05.532 15:44:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:05.532 15:44:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.532 15:44:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.532 [2024-12-06 15:44:48.759589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:05.532 15:44:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.532 15:44:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:05.532 15:44:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:05.532 15:44:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:05.532 15:44:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:05.532 15:44:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:05.532 15:44:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:05.532 15:44:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:18:05.532 15:44:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.532 15:44:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.532 15:44:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.532 15:44:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:05.532 15:44:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.532 15:44:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.532 15:44:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.532 15:44:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.532 15:44:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.532 "name": "Existed_Raid", 00:18:05.532 "uuid": "230fae98-e7ee-46b6-bb2d-8aa445c6a490", 00:18:05.532 "strip_size_kb": 64, 00:18:05.532 "state": "configuring", 00:18:05.532 "raid_level": "raid5f", 00:18:05.532 "superblock": true, 00:18:05.532 "num_base_bdevs": 3, 00:18:05.532 "num_base_bdevs_discovered": 2, 00:18:05.532 "num_base_bdevs_operational": 3, 00:18:05.532 "base_bdevs_list": [ 00:18:05.532 { 00:18:05.532 "name": "BaseBdev1", 00:18:05.532 "uuid": "38115cc8-7d96-4df3-be56-b8ed2736c343", 00:18:05.532 "is_configured": true, 00:18:05.532 "data_offset": 2048, 00:18:05.532 "data_size": 63488 00:18:05.532 }, 00:18:05.532 { 00:18:05.532 "name": null, 00:18:05.532 "uuid": "f9708cd0-c635-4cd5-9838-909aa292b0b4", 00:18:05.532 "is_configured": false, 00:18:05.532 "data_offset": 0, 00:18:05.532 "data_size": 63488 00:18:05.532 }, 00:18:05.532 { 00:18:05.532 "name": "BaseBdev3", 00:18:05.532 "uuid": "290c5ec2-dfa9-4f93-b4b2-7daafe088414", 
00:18:05.532 "is_configured": true, 00:18:05.532 "data_offset": 2048, 00:18:05.532 "data_size": 63488 00:18:05.532 } 00:18:05.532 ] 00:18:05.532 }' 00:18:05.532 15:44:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.532 15:44:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.101 15:44:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.101 15:44:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.101 15:44:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.101 15:44:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:06.101 15:44:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.101 15:44:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:06.101 15:44:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:06.101 15:44:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.101 15:44:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.101 [2024-12-06 15:44:49.179011] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:06.101 15:44:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.101 15:44:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:06.101 15:44:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:06.101 15:44:49 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:06.101 15:44:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:06.101 15:44:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:06.101 15:44:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:06.101 15:44:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.101 15:44:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.101 15:44:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.101 15:44:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.101 15:44:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.101 15:44:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:06.101 15:44:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.101 15:44:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.101 15:44:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.101 15:44:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.101 "name": "Existed_Raid", 00:18:06.101 "uuid": "230fae98-e7ee-46b6-bb2d-8aa445c6a490", 00:18:06.101 "strip_size_kb": 64, 00:18:06.101 "state": "configuring", 00:18:06.101 "raid_level": "raid5f", 00:18:06.101 "superblock": true, 00:18:06.101 "num_base_bdevs": 3, 00:18:06.101 "num_base_bdevs_discovered": 1, 00:18:06.101 "num_base_bdevs_operational": 3, 00:18:06.101 "base_bdevs_list": [ 00:18:06.101 { 00:18:06.101 
"name": null, 00:18:06.102 "uuid": "38115cc8-7d96-4df3-be56-b8ed2736c343", 00:18:06.102 "is_configured": false, 00:18:06.102 "data_offset": 0, 00:18:06.102 "data_size": 63488 00:18:06.102 }, 00:18:06.102 { 00:18:06.102 "name": null, 00:18:06.102 "uuid": "f9708cd0-c635-4cd5-9838-909aa292b0b4", 00:18:06.102 "is_configured": false, 00:18:06.102 "data_offset": 0, 00:18:06.102 "data_size": 63488 00:18:06.102 }, 00:18:06.102 { 00:18:06.102 "name": "BaseBdev3", 00:18:06.102 "uuid": "290c5ec2-dfa9-4f93-b4b2-7daafe088414", 00:18:06.102 "is_configured": true, 00:18:06.102 "data_offset": 2048, 00:18:06.102 "data_size": 63488 00:18:06.102 } 00:18:06.102 ] 00:18:06.102 }' 00:18:06.102 15:44:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.102 15:44:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.671 15:44:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:06.671 15:44:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.671 15:44:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.671 15:44:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.671 15:44:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.671 15:44:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:06.671 15:44:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:06.671 15:44:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.671 15:44:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.671 [2024-12-06 
15:44:49.719146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:06.671 15:44:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.671 15:44:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:06.671 15:44:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:06.671 15:44:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:06.671 15:44:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:06.671 15:44:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:06.671 15:44:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:06.671 15:44:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.671 15:44:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.671 15:44:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.671 15:44:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.671 15:44:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.671 15:44:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:06.671 15:44:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.671 15:44:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.671 15:44:49 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.671 15:44:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.671 "name": "Existed_Raid", 00:18:06.671 "uuid": "230fae98-e7ee-46b6-bb2d-8aa445c6a490", 00:18:06.671 "strip_size_kb": 64, 00:18:06.671 "state": "configuring", 00:18:06.671 "raid_level": "raid5f", 00:18:06.671 "superblock": true, 00:18:06.671 "num_base_bdevs": 3, 00:18:06.671 "num_base_bdevs_discovered": 2, 00:18:06.671 "num_base_bdevs_operational": 3, 00:18:06.671 "base_bdevs_list": [ 00:18:06.671 { 00:18:06.671 "name": null, 00:18:06.671 "uuid": "38115cc8-7d96-4df3-be56-b8ed2736c343", 00:18:06.671 "is_configured": false, 00:18:06.671 "data_offset": 0, 00:18:06.671 "data_size": 63488 00:18:06.671 }, 00:18:06.671 { 00:18:06.671 "name": "BaseBdev2", 00:18:06.671 "uuid": "f9708cd0-c635-4cd5-9838-909aa292b0b4", 00:18:06.671 "is_configured": true, 00:18:06.671 "data_offset": 2048, 00:18:06.671 "data_size": 63488 00:18:06.671 }, 00:18:06.671 { 00:18:06.671 "name": "BaseBdev3", 00:18:06.671 "uuid": "290c5ec2-dfa9-4f93-b4b2-7daafe088414", 00:18:06.671 "is_configured": true, 00:18:06.671 "data_offset": 2048, 00:18:06.671 "data_size": 63488 00:18:06.671 } 00:18:06.671 ] 00:18:06.671 }' 00:18:06.671 15:44:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.671 15:44:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.930 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:06.930 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.930 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.930 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.930 15:44:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.930 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:06.930 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.930 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.930 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.930 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:06.930 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.189 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 38115cc8-7d96-4df3-be56-b8ed2736c343 00:18:07.189 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.189 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.189 [2024-12-06 15:44:50.273316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:07.189 [2024-12-06 15:44:50.273628] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:07.189 [2024-12-06 15:44:50.273651] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:07.189 [2024-12-06 15:44:50.273939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:07.189 NewBaseBdev 00:18:07.189 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.189 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:07.189 15:44:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:18:07.189 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:07.189 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:07.189 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:07.189 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:07.189 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:07.189 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.190 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.190 [2024-12-06 15:44:50.279473] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:07.190 [2024-12-06 15:44:50.279519] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:07.190 [2024-12-06 15:44:50.279700] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:07.190 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.190 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:07.190 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.190 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.190 [ 00:18:07.190 { 00:18:07.190 "name": "NewBaseBdev", 00:18:07.190 "aliases": [ 00:18:07.190 "38115cc8-7d96-4df3-be56-b8ed2736c343" 00:18:07.190 ], 00:18:07.190 "product_name": "Malloc 
disk", 00:18:07.190 "block_size": 512, 00:18:07.190 "num_blocks": 65536, 00:18:07.190 "uuid": "38115cc8-7d96-4df3-be56-b8ed2736c343", 00:18:07.190 "assigned_rate_limits": { 00:18:07.190 "rw_ios_per_sec": 0, 00:18:07.190 "rw_mbytes_per_sec": 0, 00:18:07.190 "r_mbytes_per_sec": 0, 00:18:07.190 "w_mbytes_per_sec": 0 00:18:07.190 }, 00:18:07.190 "claimed": true, 00:18:07.190 "claim_type": "exclusive_write", 00:18:07.190 "zoned": false, 00:18:07.190 "supported_io_types": { 00:18:07.190 "read": true, 00:18:07.190 "write": true, 00:18:07.190 "unmap": true, 00:18:07.190 "flush": true, 00:18:07.190 "reset": true, 00:18:07.190 "nvme_admin": false, 00:18:07.190 "nvme_io": false, 00:18:07.190 "nvme_io_md": false, 00:18:07.190 "write_zeroes": true, 00:18:07.190 "zcopy": true, 00:18:07.190 "get_zone_info": false, 00:18:07.190 "zone_management": false, 00:18:07.190 "zone_append": false, 00:18:07.190 "compare": false, 00:18:07.190 "compare_and_write": false, 00:18:07.190 "abort": true, 00:18:07.190 "seek_hole": false, 00:18:07.190 "seek_data": false, 00:18:07.190 "copy": true, 00:18:07.190 "nvme_iov_md": false 00:18:07.190 }, 00:18:07.190 "memory_domains": [ 00:18:07.190 { 00:18:07.190 "dma_device_id": "system", 00:18:07.190 "dma_device_type": 1 00:18:07.190 }, 00:18:07.190 { 00:18:07.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.190 "dma_device_type": 2 00:18:07.190 } 00:18:07.190 ], 00:18:07.190 "driver_specific": {} 00:18:07.190 } 00:18:07.190 ] 00:18:07.190 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.190 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:07.190 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:07.190 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:07.190 15:44:50 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:07.190 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:07.190 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:07.190 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:07.190 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.190 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.190 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.190 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.190 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.190 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.190 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:07.190 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.190 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.190 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.190 "name": "Existed_Raid", 00:18:07.190 "uuid": "230fae98-e7ee-46b6-bb2d-8aa445c6a490", 00:18:07.190 "strip_size_kb": 64, 00:18:07.190 "state": "online", 00:18:07.190 "raid_level": "raid5f", 00:18:07.190 "superblock": true, 00:18:07.190 "num_base_bdevs": 3, 00:18:07.190 "num_base_bdevs_discovered": 3, 00:18:07.190 "num_base_bdevs_operational": 3, 00:18:07.190 
"base_bdevs_list": [ 00:18:07.190 { 00:18:07.190 "name": "NewBaseBdev", 00:18:07.190 "uuid": "38115cc8-7d96-4df3-be56-b8ed2736c343", 00:18:07.190 "is_configured": true, 00:18:07.190 "data_offset": 2048, 00:18:07.190 "data_size": 63488 00:18:07.190 }, 00:18:07.190 { 00:18:07.190 "name": "BaseBdev2", 00:18:07.190 "uuid": "f9708cd0-c635-4cd5-9838-909aa292b0b4", 00:18:07.190 "is_configured": true, 00:18:07.190 "data_offset": 2048, 00:18:07.190 "data_size": 63488 00:18:07.190 }, 00:18:07.190 { 00:18:07.190 "name": "BaseBdev3", 00:18:07.190 "uuid": "290c5ec2-dfa9-4f93-b4b2-7daafe088414", 00:18:07.190 "is_configured": true, 00:18:07.190 "data_offset": 2048, 00:18:07.190 "data_size": 63488 00:18:07.190 } 00:18:07.190 ] 00:18:07.190 }' 00:18:07.190 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.190 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.449 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:07.449 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:07.449 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:07.450 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:07.450 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:07.450 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:07.450 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:07.450 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:07.450 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:07.450 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.450 [2024-12-06 15:44:50.734360] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:07.710 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.710 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:07.710 "name": "Existed_Raid", 00:18:07.710 "aliases": [ 00:18:07.710 "230fae98-e7ee-46b6-bb2d-8aa445c6a490" 00:18:07.710 ], 00:18:07.710 "product_name": "Raid Volume", 00:18:07.710 "block_size": 512, 00:18:07.710 "num_blocks": 126976, 00:18:07.710 "uuid": "230fae98-e7ee-46b6-bb2d-8aa445c6a490", 00:18:07.710 "assigned_rate_limits": { 00:18:07.710 "rw_ios_per_sec": 0, 00:18:07.710 "rw_mbytes_per_sec": 0, 00:18:07.710 "r_mbytes_per_sec": 0, 00:18:07.710 "w_mbytes_per_sec": 0 00:18:07.710 }, 00:18:07.710 "claimed": false, 00:18:07.710 "zoned": false, 00:18:07.710 "supported_io_types": { 00:18:07.710 "read": true, 00:18:07.710 "write": true, 00:18:07.710 "unmap": false, 00:18:07.710 "flush": false, 00:18:07.710 "reset": true, 00:18:07.710 "nvme_admin": false, 00:18:07.710 "nvme_io": false, 00:18:07.710 "nvme_io_md": false, 00:18:07.710 "write_zeroes": true, 00:18:07.710 "zcopy": false, 00:18:07.710 "get_zone_info": false, 00:18:07.710 "zone_management": false, 00:18:07.710 "zone_append": false, 00:18:07.710 "compare": false, 00:18:07.710 "compare_and_write": false, 00:18:07.710 "abort": false, 00:18:07.710 "seek_hole": false, 00:18:07.710 "seek_data": false, 00:18:07.710 "copy": false, 00:18:07.710 "nvme_iov_md": false 00:18:07.710 }, 00:18:07.710 "driver_specific": { 00:18:07.710 "raid": { 00:18:07.710 "uuid": "230fae98-e7ee-46b6-bb2d-8aa445c6a490", 00:18:07.710 "strip_size_kb": 64, 00:18:07.710 "state": "online", 00:18:07.710 "raid_level": "raid5f", 00:18:07.710 "superblock": true, 00:18:07.710 
"num_base_bdevs": 3, 00:18:07.710 "num_base_bdevs_discovered": 3, 00:18:07.710 "num_base_bdevs_operational": 3, 00:18:07.710 "base_bdevs_list": [ 00:18:07.710 { 00:18:07.710 "name": "NewBaseBdev", 00:18:07.710 "uuid": "38115cc8-7d96-4df3-be56-b8ed2736c343", 00:18:07.710 "is_configured": true, 00:18:07.710 "data_offset": 2048, 00:18:07.710 "data_size": 63488 00:18:07.710 }, 00:18:07.710 { 00:18:07.710 "name": "BaseBdev2", 00:18:07.710 "uuid": "f9708cd0-c635-4cd5-9838-909aa292b0b4", 00:18:07.710 "is_configured": true, 00:18:07.710 "data_offset": 2048, 00:18:07.710 "data_size": 63488 00:18:07.710 }, 00:18:07.710 { 00:18:07.710 "name": "BaseBdev3", 00:18:07.710 "uuid": "290c5ec2-dfa9-4f93-b4b2-7daafe088414", 00:18:07.710 "is_configured": true, 00:18:07.710 "data_offset": 2048, 00:18:07.710 "data_size": 63488 00:18:07.710 } 00:18:07.710 ] 00:18:07.710 } 00:18:07.710 } 00:18:07.710 }' 00:18:07.710 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:07.710 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:07.710 BaseBdev2 00:18:07.710 BaseBdev3' 00:18:07.710 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:07.710 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:07.710 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:07.710 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:07.710 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:07.710 15:44:50 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.710 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.710 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.710 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:07.710 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:07.710 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:07.710 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:07.710 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.710 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.710 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:07.710 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.710 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:07.710 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:07.710 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:07.710 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:07.710 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:07.710 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:07.710 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.710 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.710 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:07.710 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:07.710 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:07.710 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.710 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.710 [2024-12-06 15:44:50.974062] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:07.710 [2024-12-06 15:44:50.974097] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:07.710 [2024-12-06 15:44:50.974182] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:07.710 [2024-12-06 15:44:50.974520] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:07.710 [2024-12-06 15:44:50.974542] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:07.710 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.710 15:44:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80529 00:18:07.710 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80529 ']' 00:18:07.710 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80529 00:18:07.710 15:44:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:18:07.710 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:07.710 15:44:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80529 00:18:07.973 killing process with pid 80529 00:18:07.973 15:44:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:07.973 15:44:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:07.973 15:44:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80529' 00:18:07.973 15:44:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80529 00:18:07.973 [2024-12-06 15:44:51.013774] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:07.973 15:44:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80529 00:18:08.231 [2024-12-06 15:44:51.342388] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:09.607 15:44:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:18:09.607 00:18:09.607 real 0m10.293s 00:18:09.607 user 0m15.931s 00:18:09.607 sys 0m2.221s 00:18:09.607 15:44:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:09.607 ************************************ 00:18:09.607 END TEST raid5f_state_function_test_sb 00:18:09.607 15:44:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.607 ************************************ 00:18:09.607 15:44:52 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:18:09.607 15:44:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:09.607 
15:44:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:09.607 15:44:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:09.607 ************************************ 00:18:09.607 START TEST raid5f_superblock_test 00:18:09.607 ************************************ 00:18:09.607 15:44:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:18:09.607 15:44:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:18:09.607 15:44:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:18:09.607 15:44:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:09.607 15:44:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:09.607 15:44:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:09.607 15:44:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:09.607 15:44:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:09.607 15:44:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:09.607 15:44:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:09.607 15:44:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:09.607 15:44:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:09.607 15:44:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:09.607 15:44:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:09.607 15:44:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:18:09.607 15:44:52 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:18:09.607 15:44:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:18:09.607 15:44:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81146 00:18:09.607 15:44:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:09.607 15:44:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81146 00:18:09.607 15:44:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81146 ']' 00:18:09.607 15:44:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.607 15:44:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:09.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.607 15:44:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.607 15:44:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:09.607 15:44:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.607 [2024-12-06 15:44:52.771213] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:18:09.607 [2024-12-06 15:44:52.771369] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81146 ] 00:18:09.867 [2024-12-06 15:44:52.951281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.867 [2024-12-06 15:44:53.096296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.126 [2024-12-06 15:44:53.344035] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:10.126 [2024-12-06 15:44:53.344079] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:10.387 15:44:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:10.387 15:44:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:18:10.387 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:10.387 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:10.387 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:10.387 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:10.387 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:10.387 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:10.387 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:10.387 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:10.387 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:18:10.387 15:44:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.387 15:44:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.387 malloc1 00:18:10.387 15:44:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.387 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:10.387 15:44:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.387 15:44:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.387 [2024-12-06 15:44:53.659750] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:10.387 [2024-12-06 15:44:53.659824] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.387 [2024-12-06 15:44:53.659851] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:10.387 [2024-12-06 15:44:53.659864] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.387 [2024-12-06 15:44:53.662621] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.387 [2024-12-06 15:44:53.662665] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:10.387 pt1 00:18:10.387 15:44:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.387 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:10.387 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:10.387 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:10.387 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:18:10.387 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:10.387 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:10.387 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:10.387 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:10.387 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:18:10.387 15:44:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.387 15:44:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.647 malloc2 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.647 [2024-12-06 15:44:53.723735] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:10.647 [2024-12-06 15:44:53.723797] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.647 [2024-12-06 15:44:53.723830] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:10.647 [2024-12-06 15:44:53.723843] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.647 [2024-12-06 15:44:53.726529] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.647 [2024-12-06 15:44:53.726571] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:10.647 pt2 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.647 malloc3 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.647 [2024-12-06 15:44:53.799949] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:10.647 [2024-12-06 15:44:53.800005] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.647 [2024-12-06 15:44:53.800030] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:10.647 [2024-12-06 15:44:53.800042] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.647 [2024-12-06 15:44:53.802727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.647 [2024-12-06 15:44:53.802769] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:10.647 pt3 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.647 [2024-12-06 15:44:53.811991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:10.647 [2024-12-06 15:44:53.814391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:10.647 [2024-12-06 15:44:53.814470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:10.647 [2024-12-06 15:44:53.814667] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:10.647 [2024-12-06 15:44:53.814696] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:18:10.647 [2024-12-06 15:44:53.814965] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:10.647 [2024-12-06 15:44:53.821087] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:10.647 [2024-12-06 15:44:53.821112] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:10.647 [2024-12-06 15:44:53.821312] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.647 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.647 "name": "raid_bdev1", 00:18:10.647 "uuid": "ab677845-2cc9-4139-a7b8-3e85c8cbe8c1", 00:18:10.647 "strip_size_kb": 64, 00:18:10.647 "state": "online", 00:18:10.647 "raid_level": "raid5f", 00:18:10.647 "superblock": true, 00:18:10.647 "num_base_bdevs": 3, 00:18:10.647 "num_base_bdevs_discovered": 3, 00:18:10.647 "num_base_bdevs_operational": 3, 00:18:10.647 "base_bdevs_list": [ 00:18:10.647 { 00:18:10.647 "name": "pt1", 00:18:10.648 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:10.648 "is_configured": true, 00:18:10.648 "data_offset": 2048, 00:18:10.648 "data_size": 63488 00:18:10.648 }, 00:18:10.648 { 00:18:10.648 "name": "pt2", 00:18:10.648 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:10.648 "is_configured": true, 00:18:10.648 "data_offset": 2048, 00:18:10.648 "data_size": 63488 00:18:10.648 }, 00:18:10.648 { 00:18:10.648 "name": "pt3", 00:18:10.648 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:10.648 "is_configured": true, 00:18:10.648 "data_offset": 2048, 00:18:10.648 "data_size": 63488 00:18:10.648 } 00:18:10.648 ] 00:18:10.648 }' 00:18:10.648 15:44:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.648 15:44:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.217 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:11.217 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:11.217 15:44:54 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:11.217 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:11.217 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:11.217 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:11.217 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:11.217 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.217 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.217 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:11.217 [2024-12-06 15:44:54.247954] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:11.217 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.217 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:11.217 "name": "raid_bdev1", 00:18:11.217 "aliases": [ 00:18:11.217 "ab677845-2cc9-4139-a7b8-3e85c8cbe8c1" 00:18:11.217 ], 00:18:11.217 "product_name": "Raid Volume", 00:18:11.217 "block_size": 512, 00:18:11.217 "num_blocks": 126976, 00:18:11.217 "uuid": "ab677845-2cc9-4139-a7b8-3e85c8cbe8c1", 00:18:11.217 "assigned_rate_limits": { 00:18:11.217 "rw_ios_per_sec": 0, 00:18:11.217 "rw_mbytes_per_sec": 0, 00:18:11.217 "r_mbytes_per_sec": 0, 00:18:11.217 "w_mbytes_per_sec": 0 00:18:11.217 }, 00:18:11.217 "claimed": false, 00:18:11.217 "zoned": false, 00:18:11.217 "supported_io_types": { 00:18:11.217 "read": true, 00:18:11.217 "write": true, 00:18:11.218 "unmap": false, 00:18:11.218 "flush": false, 00:18:11.218 "reset": true, 00:18:11.218 "nvme_admin": false, 00:18:11.218 "nvme_io": false, 00:18:11.218 "nvme_io_md": false, 
00:18:11.218 "write_zeroes": true, 00:18:11.218 "zcopy": false, 00:18:11.218 "get_zone_info": false, 00:18:11.218 "zone_management": false, 00:18:11.218 "zone_append": false, 00:18:11.218 "compare": false, 00:18:11.218 "compare_and_write": false, 00:18:11.218 "abort": false, 00:18:11.218 "seek_hole": false, 00:18:11.218 "seek_data": false, 00:18:11.218 "copy": false, 00:18:11.218 "nvme_iov_md": false 00:18:11.218 }, 00:18:11.218 "driver_specific": { 00:18:11.218 "raid": { 00:18:11.218 "uuid": "ab677845-2cc9-4139-a7b8-3e85c8cbe8c1", 00:18:11.218 "strip_size_kb": 64, 00:18:11.218 "state": "online", 00:18:11.218 "raid_level": "raid5f", 00:18:11.218 "superblock": true, 00:18:11.218 "num_base_bdevs": 3, 00:18:11.218 "num_base_bdevs_discovered": 3, 00:18:11.218 "num_base_bdevs_operational": 3, 00:18:11.218 "base_bdevs_list": [ 00:18:11.218 { 00:18:11.218 "name": "pt1", 00:18:11.218 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:11.218 "is_configured": true, 00:18:11.218 "data_offset": 2048, 00:18:11.218 "data_size": 63488 00:18:11.218 }, 00:18:11.218 { 00:18:11.218 "name": "pt2", 00:18:11.218 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:11.218 "is_configured": true, 00:18:11.218 "data_offset": 2048, 00:18:11.218 "data_size": 63488 00:18:11.218 }, 00:18:11.218 { 00:18:11.218 "name": "pt3", 00:18:11.218 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:11.218 "is_configured": true, 00:18:11.218 "data_offset": 2048, 00:18:11.218 "data_size": 63488 00:18:11.218 } 00:18:11.218 ] 00:18:11.218 } 00:18:11.218 } 00:18:11.218 }' 00:18:11.218 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:11.218 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:11.218 pt2 00:18:11.218 pt3' 00:18:11.218 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:18:11.218 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:11.218 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:11.218 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:11.218 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:11.218 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.218 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.218 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.218 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:11.218 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:11.218 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:11.218 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:11.218 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.218 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:11.218 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.218 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.218 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:11.218 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:11.218 
15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:11.218 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:11.218 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:11.218 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.218 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.218 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.218 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:11.218 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:11.218 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:11.218 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.218 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.218 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:11.218 [2024-12-06 15:44:54.479775] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:11.218 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ab677845-2cc9-4139-a7b8-3e85c8cbe8c1 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ab677845-2cc9-4139-a7b8-3e85c8cbe8c1 ']' 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:11.479 15:44:54 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.479 [2024-12-06 15:44:54.519610] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:11.479 [2024-12-06 15:44:54.519644] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:11.479 [2024-12-06 15:44:54.519725] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:11.479 [2024-12-06 15:44:54.519810] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:11.479 [2024-12-06 15:44:54.519822] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.479 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.479 [2024-12-06 15:44:54.663548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:11.479 [2024-12-06 15:44:54.665928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:11.479 [2024-12-06 15:44:54.665990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:11.479 [2024-12-06 15:44:54.666057] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:11.479 [2024-12-06 15:44:54.666109] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:11.479 [2024-12-06 15:44:54.666131] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:11.480 [2024-12-06 15:44:54.666152] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:11.480 [2024-12-06 15:44:54.666163] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:11.480 request: 00:18:11.480 { 00:18:11.480 "name": "raid_bdev1", 00:18:11.480 "raid_level": "raid5f", 00:18:11.480 "base_bdevs": [ 00:18:11.480 "malloc1", 00:18:11.480 "malloc2", 00:18:11.480 "malloc3" 00:18:11.480 ], 00:18:11.480 "strip_size_kb": 64, 00:18:11.480 "superblock": false, 00:18:11.480 "method": "bdev_raid_create", 00:18:11.480 "req_id": 1 00:18:11.480 } 00:18:11.480 Got JSON-RPC error response 00:18:11.480 response: 00:18:11.480 { 00:18:11.480 "code": -17, 00:18:11.480 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:11.480 } 00:18:11.480 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:11.480 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:18:11.480 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:11.480 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:11.480 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:11.480 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.480 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.480 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:18:11.480 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:11.480 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.480 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:11.480 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:11.480 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:11.480 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.480 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.480 [2024-12-06 15:44:54.723385] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:11.480 [2024-12-06 15:44:54.723438] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:11.480 [2024-12-06 15:44:54.723461] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:11.480 [2024-12-06 15:44:54.723472] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:11.480 [2024-12-06 15:44:54.726307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:11.480 [2024-12-06 15:44:54.726453] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:11.480 [2024-12-06 15:44:54.726569] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:11.480 [2024-12-06 15:44:54.726636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:11.480 pt1 00:18:11.480 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.480 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid5f 64 3 00:18:11.480 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:11.480 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:11.480 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:11.480 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:11.480 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:11.480 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.480 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.480 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.480 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.480 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.480 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.480 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.480 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.480 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.745 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.745 "name": "raid_bdev1", 00:18:11.745 "uuid": "ab677845-2cc9-4139-a7b8-3e85c8cbe8c1", 00:18:11.745 "strip_size_kb": 64, 00:18:11.745 "state": "configuring", 00:18:11.745 "raid_level": "raid5f", 00:18:11.745 "superblock": true, 00:18:11.745 "num_base_bdevs": 3, 00:18:11.745 "num_base_bdevs_discovered": 1, 00:18:11.745 
"num_base_bdevs_operational": 3, 00:18:11.745 "base_bdevs_list": [ 00:18:11.745 { 00:18:11.745 "name": "pt1", 00:18:11.745 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:11.745 "is_configured": true, 00:18:11.745 "data_offset": 2048, 00:18:11.745 "data_size": 63488 00:18:11.745 }, 00:18:11.745 { 00:18:11.745 "name": null, 00:18:11.745 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:11.745 "is_configured": false, 00:18:11.745 "data_offset": 2048, 00:18:11.745 "data_size": 63488 00:18:11.745 }, 00:18:11.745 { 00:18:11.745 "name": null, 00:18:11.745 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:11.745 "is_configured": false, 00:18:11.745 "data_offset": 2048, 00:18:11.745 "data_size": 63488 00:18:11.745 } 00:18:11.745 ] 00:18:11.745 }' 00:18:11.745 15:44:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.745 15:44:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.070 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:18:12.070 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:12.070 15:44:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.070 15:44:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.070 [2024-12-06 15:44:55.150807] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:12.070 [2024-12-06 15:44:55.150874] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.070 [2024-12-06 15:44:55.150903] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:18:12.070 [2024-12-06 15:44:55.150915] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.070 [2024-12-06 15:44:55.151426] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.070 [2024-12-06 15:44:55.151458] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:12.070 [2024-12-06 15:44:55.151567] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:12.070 [2024-12-06 15:44:55.151604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:12.070 pt2 00:18:12.070 15:44:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.070 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:18:12.070 15:44:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.070 15:44:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.070 [2024-12-06 15:44:55.158805] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:12.070 15:44:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.070 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:12.070 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:12.070 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:12.070 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:12.070 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:12.070 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:12.070 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.070 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:18:12.070 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.070 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.070 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.070 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.070 15:44:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.070 15:44:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.070 15:44:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.070 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.070 "name": "raid_bdev1", 00:18:12.070 "uuid": "ab677845-2cc9-4139-a7b8-3e85c8cbe8c1", 00:18:12.070 "strip_size_kb": 64, 00:18:12.070 "state": "configuring", 00:18:12.070 "raid_level": "raid5f", 00:18:12.070 "superblock": true, 00:18:12.070 "num_base_bdevs": 3, 00:18:12.070 "num_base_bdevs_discovered": 1, 00:18:12.070 "num_base_bdevs_operational": 3, 00:18:12.070 "base_bdevs_list": [ 00:18:12.070 { 00:18:12.070 "name": "pt1", 00:18:12.070 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:12.070 "is_configured": true, 00:18:12.070 "data_offset": 2048, 00:18:12.070 "data_size": 63488 00:18:12.070 }, 00:18:12.070 { 00:18:12.070 "name": null, 00:18:12.070 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:12.070 "is_configured": false, 00:18:12.070 "data_offset": 0, 00:18:12.070 "data_size": 63488 00:18:12.070 }, 00:18:12.070 { 00:18:12.070 "name": null, 00:18:12.070 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:12.070 "is_configured": false, 00:18:12.070 "data_offset": 2048, 00:18:12.070 "data_size": 63488 00:18:12.070 } 00:18:12.070 ] 00:18:12.070 }' 00:18:12.070 15:44:55 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.070 15:44:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.337 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:12.337 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:12.337 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:12.337 15:44:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.337 15:44:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.337 [2024-12-06 15:44:55.550515] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:12.337 [2024-12-06 15:44:55.550603] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.337 [2024-12-06 15:44:55.550627] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:12.337 [2024-12-06 15:44:55.550642] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.337 [2024-12-06 15:44:55.551231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.337 [2024-12-06 15:44:55.551265] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:12.337 [2024-12-06 15:44:55.551367] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:12.337 [2024-12-06 15:44:55.551399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:12.337 pt2 00:18:12.337 15:44:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.337 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:12.337 15:44:55 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:12.337 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:12.337 15:44:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.337 15:44:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.337 [2024-12-06 15:44:55.562458] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:12.337 [2024-12-06 15:44:55.562536] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.337 [2024-12-06 15:44:55.562555] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:12.337 [2024-12-06 15:44:55.562568] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.337 [2024-12-06 15:44:55.563030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.337 [2024-12-06 15:44:55.563061] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:12.337 [2024-12-06 15:44:55.563129] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:12.337 [2024-12-06 15:44:55.563154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:12.337 [2024-12-06 15:44:55.563298] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:12.337 [2024-12-06 15:44:55.563313] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:12.337 [2024-12-06 15:44:55.563604] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:12.337 [2024-12-06 15:44:55.569057] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:12.337 [2024-12-06 15:44:55.569080] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:12.337 [2024-12-06 15:44:55.569283] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:12.337 pt3 00:18:12.337 15:44:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.337 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:12.337 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:12.337 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:12.337 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:12.337 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:12.337 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:12.337 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:12.337 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:12.337 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.337 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.337 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.337 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.337 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.337 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.337 15:44:55 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.337 15:44:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.337 15:44:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.337 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.337 "name": "raid_bdev1", 00:18:12.337 "uuid": "ab677845-2cc9-4139-a7b8-3e85c8cbe8c1", 00:18:12.337 "strip_size_kb": 64, 00:18:12.337 "state": "online", 00:18:12.337 "raid_level": "raid5f", 00:18:12.337 "superblock": true, 00:18:12.337 "num_base_bdevs": 3, 00:18:12.337 "num_base_bdevs_discovered": 3, 00:18:12.337 "num_base_bdevs_operational": 3, 00:18:12.337 "base_bdevs_list": [ 00:18:12.337 { 00:18:12.337 "name": "pt1", 00:18:12.337 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:12.337 "is_configured": true, 00:18:12.337 "data_offset": 2048, 00:18:12.337 "data_size": 63488 00:18:12.337 }, 00:18:12.337 { 00:18:12.337 "name": "pt2", 00:18:12.337 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:12.337 "is_configured": true, 00:18:12.337 "data_offset": 2048, 00:18:12.337 "data_size": 63488 00:18:12.337 }, 00:18:12.337 { 00:18:12.337 "name": "pt3", 00:18:12.337 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:12.337 "is_configured": true, 00:18:12.337 "data_offset": 2048, 00:18:12.337 "data_size": 63488 00:18:12.337 } 00:18:12.337 ] 00:18:12.337 }' 00:18:12.337 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.337 15:44:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.905 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:12.905 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:12.905 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:18:12.905 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:12.905 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:12.905 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:12.905 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:12.905 15:44:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:12.905 15:44:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.905 15:44:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.905 [2024-12-06 15:44:55.983862] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:12.905 15:44:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.905 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:12.905 "name": "raid_bdev1", 00:18:12.905 "aliases": [ 00:18:12.905 "ab677845-2cc9-4139-a7b8-3e85c8cbe8c1" 00:18:12.905 ], 00:18:12.905 "product_name": "Raid Volume", 00:18:12.905 "block_size": 512, 00:18:12.905 "num_blocks": 126976, 00:18:12.905 "uuid": "ab677845-2cc9-4139-a7b8-3e85c8cbe8c1", 00:18:12.905 "assigned_rate_limits": { 00:18:12.905 "rw_ios_per_sec": 0, 00:18:12.905 "rw_mbytes_per_sec": 0, 00:18:12.905 "r_mbytes_per_sec": 0, 00:18:12.905 "w_mbytes_per_sec": 0 00:18:12.905 }, 00:18:12.905 "claimed": false, 00:18:12.905 "zoned": false, 00:18:12.905 "supported_io_types": { 00:18:12.905 "read": true, 00:18:12.905 "write": true, 00:18:12.905 "unmap": false, 00:18:12.905 "flush": false, 00:18:12.905 "reset": true, 00:18:12.905 "nvme_admin": false, 00:18:12.905 "nvme_io": false, 00:18:12.905 "nvme_io_md": false, 00:18:12.905 "write_zeroes": true, 00:18:12.905 "zcopy": false, 00:18:12.905 
"get_zone_info": false, 00:18:12.905 "zone_management": false, 00:18:12.905 "zone_append": false, 00:18:12.905 "compare": false, 00:18:12.905 "compare_and_write": false, 00:18:12.905 "abort": false, 00:18:12.905 "seek_hole": false, 00:18:12.905 "seek_data": false, 00:18:12.905 "copy": false, 00:18:12.905 "nvme_iov_md": false 00:18:12.905 }, 00:18:12.905 "driver_specific": { 00:18:12.905 "raid": { 00:18:12.905 "uuid": "ab677845-2cc9-4139-a7b8-3e85c8cbe8c1", 00:18:12.905 "strip_size_kb": 64, 00:18:12.905 "state": "online", 00:18:12.905 "raid_level": "raid5f", 00:18:12.905 "superblock": true, 00:18:12.905 "num_base_bdevs": 3, 00:18:12.905 "num_base_bdevs_discovered": 3, 00:18:12.905 "num_base_bdevs_operational": 3, 00:18:12.905 "base_bdevs_list": [ 00:18:12.905 { 00:18:12.905 "name": "pt1", 00:18:12.905 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:12.905 "is_configured": true, 00:18:12.905 "data_offset": 2048, 00:18:12.905 "data_size": 63488 00:18:12.905 }, 00:18:12.905 { 00:18:12.905 "name": "pt2", 00:18:12.905 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:12.906 "is_configured": true, 00:18:12.906 "data_offset": 2048, 00:18:12.906 "data_size": 63488 00:18:12.906 }, 00:18:12.906 { 00:18:12.906 "name": "pt3", 00:18:12.906 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:12.906 "is_configured": true, 00:18:12.906 "data_offset": 2048, 00:18:12.906 "data_size": 63488 00:18:12.906 } 00:18:12.906 ] 00:18:12.906 } 00:18:12.906 } 00:18:12.906 }' 00:18:12.906 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:12.906 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:12.906 pt2 00:18:12.906 pt3' 00:18:12.906 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:12.906 15:44:56 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:12.906 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:12.906 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:12.906 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:12.906 15:44:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.906 15:44:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.906 15:44:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.906 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:12.906 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:12.906 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:12.906 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:12.906 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:12.906 15:44:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.906 15:44:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.906 15:44:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.906 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:12.906 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:12.906 15:44:56 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:12.906 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:12.906 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:12.906 15:44:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.906 15:44:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.906 15:44:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.165 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:13.165 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:13.165 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:13.165 15:44:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.165 15:44:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.165 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:13.165 [2024-12-06 15:44:56.219527] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:13.165 15:44:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.165 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ab677845-2cc9-4139-a7b8-3e85c8cbe8c1 '!=' ab677845-2cc9-4139-a7b8-3e85c8cbe8c1 ']' 00:18:13.165 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:18:13.165 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:13.165 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:18:13.165 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:13.165 15:44:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.165 15:44:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.165 [2024-12-06 15:44:56.267327] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:13.165 15:44:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.165 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:13.165 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:13.165 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:13.165 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:13.165 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:13.165 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:13.165 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.165 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.165 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.165 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.165 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.165 15:44:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.165 15:44:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.165 
15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.165 15:44:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.165 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.165 "name": "raid_bdev1", 00:18:13.165 "uuid": "ab677845-2cc9-4139-a7b8-3e85c8cbe8c1", 00:18:13.165 "strip_size_kb": 64, 00:18:13.165 "state": "online", 00:18:13.165 "raid_level": "raid5f", 00:18:13.165 "superblock": true, 00:18:13.165 "num_base_bdevs": 3, 00:18:13.165 "num_base_bdevs_discovered": 2, 00:18:13.165 "num_base_bdevs_operational": 2, 00:18:13.165 "base_bdevs_list": [ 00:18:13.165 { 00:18:13.165 "name": null, 00:18:13.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.165 "is_configured": false, 00:18:13.166 "data_offset": 0, 00:18:13.166 "data_size": 63488 00:18:13.166 }, 00:18:13.166 { 00:18:13.166 "name": "pt2", 00:18:13.166 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:13.166 "is_configured": true, 00:18:13.166 "data_offset": 2048, 00:18:13.166 "data_size": 63488 00:18:13.166 }, 00:18:13.166 { 00:18:13.166 "name": "pt3", 00:18:13.166 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:13.166 "is_configured": true, 00:18:13.166 "data_offset": 2048, 00:18:13.166 "data_size": 63488 00:18:13.166 } 00:18:13.166 ] 00:18:13.166 }' 00:18:13.166 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.166 15:44:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.425 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:13.425 15:44:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.425 15:44:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.425 [2024-12-06 15:44:56.682777] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:13.425 [2024-12-06 15:44:56.682820] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:13.425 [2024-12-06 15:44:56.682932] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:13.425 [2024-12-06 15:44:56.683007] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:13.425 [2024-12-06 15:44:56.683028] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:13.425 15:44:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.425 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.425 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:13.425 15:44:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.425 15:44:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.425 15:44:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.685 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:13.685 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:13.685 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:13.685 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:13.685 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:13.685 15:44:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.685 15:44:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:18:13.685 15:44:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.685 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:13.685 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:13.685 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:18:13.685 15:44:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.685 15:44:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.685 15:44:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.685 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:13.685 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:13.685 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:13.685 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:13.685 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:13.685 15:44:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.685 15:44:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.685 [2024-12-06 15:44:56.774686] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:13.685 [2024-12-06 15:44:56.774782] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.685 [2024-12-06 15:44:56.774808] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:18:13.685 [2024-12-06 15:44:56.774824] vbdev_passthru.c: 697:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:18:13.685 [2024-12-06 15:44:56.777813] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:13.685 [2024-12-06 15:44:56.778011] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:13.685 [2024-12-06 15:44:56.778169] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:13.685 [2024-12-06 15:44:56.778234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:13.685 pt2 00:18:13.685 15:44:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.685 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:18:13.685 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:13.685 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:13.685 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:13.685 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:13.685 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:13.685 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.685 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.685 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.685 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.685 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.685 15:44:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:13.685 15:44:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.685 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.685 15:44:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.685 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.685 "name": "raid_bdev1", 00:18:13.685 "uuid": "ab677845-2cc9-4139-a7b8-3e85c8cbe8c1", 00:18:13.685 "strip_size_kb": 64, 00:18:13.685 "state": "configuring", 00:18:13.685 "raid_level": "raid5f", 00:18:13.685 "superblock": true, 00:18:13.685 "num_base_bdevs": 3, 00:18:13.685 "num_base_bdevs_discovered": 1, 00:18:13.685 "num_base_bdevs_operational": 2, 00:18:13.685 "base_bdevs_list": [ 00:18:13.685 { 00:18:13.685 "name": null, 00:18:13.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.685 "is_configured": false, 00:18:13.685 "data_offset": 2048, 00:18:13.685 "data_size": 63488 00:18:13.685 }, 00:18:13.685 { 00:18:13.685 "name": "pt2", 00:18:13.685 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:13.685 "is_configured": true, 00:18:13.685 "data_offset": 2048, 00:18:13.685 "data_size": 63488 00:18:13.685 }, 00:18:13.685 { 00:18:13.685 "name": null, 00:18:13.685 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:13.685 "is_configured": false, 00:18:13.685 "data_offset": 2048, 00:18:13.685 "data_size": 63488 00:18:13.685 } 00:18:13.685 ] 00:18:13.685 }' 00:18:13.685 15:44:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.685 15:44:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.945 15:44:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:13.945 15:44:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:13.945 15:44:57 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:18:13.945 15:44:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:13.945 15:44:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.945 15:44:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.204 [2024-12-06 15:44:57.242175] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:14.205 [2024-12-06 15:44:57.242272] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:14.205 [2024-12-06 15:44:57.242300] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:14.205 [2024-12-06 15:44:57.242315] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:14.205 [2024-12-06 15:44:57.242953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:14.205 [2024-12-06 15:44:57.242988] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:14.205 [2024-12-06 15:44:57.243093] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:14.205 [2024-12-06 15:44:57.243129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:14.205 [2024-12-06 15:44:57.243271] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:14.205 [2024-12-06 15:44:57.243286] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:14.205 [2024-12-06 15:44:57.243601] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:14.205 [2024-12-06 15:44:57.249189] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:14.205 [2024-12-06 15:44:57.249346] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:18:14.205 [2024-12-06 15:44:57.249774] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:14.205 pt3 00:18:14.205 15:44:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.205 15:44:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:14.205 15:44:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:14.205 15:44:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.205 15:44:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:14.205 15:44:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:14.205 15:44:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:14.205 15:44:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.205 15:44:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.205 15:44:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.205 15:44:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.205 15:44:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.205 15:44:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.205 15:44:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.205 15:44:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.205 15:44:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.205 15:44:57 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.205 "name": "raid_bdev1", 00:18:14.205 "uuid": "ab677845-2cc9-4139-a7b8-3e85c8cbe8c1", 00:18:14.205 "strip_size_kb": 64, 00:18:14.205 "state": "online", 00:18:14.205 "raid_level": "raid5f", 00:18:14.205 "superblock": true, 00:18:14.205 "num_base_bdevs": 3, 00:18:14.205 "num_base_bdevs_discovered": 2, 00:18:14.205 "num_base_bdevs_operational": 2, 00:18:14.205 "base_bdevs_list": [ 00:18:14.205 { 00:18:14.205 "name": null, 00:18:14.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.205 "is_configured": false, 00:18:14.205 "data_offset": 2048, 00:18:14.205 "data_size": 63488 00:18:14.205 }, 00:18:14.205 { 00:18:14.205 "name": "pt2", 00:18:14.205 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:14.205 "is_configured": true, 00:18:14.205 "data_offset": 2048, 00:18:14.205 "data_size": 63488 00:18:14.205 }, 00:18:14.205 { 00:18:14.205 "name": "pt3", 00:18:14.205 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:14.205 "is_configured": true, 00:18:14.205 "data_offset": 2048, 00:18:14.205 "data_size": 63488 00:18:14.205 } 00:18:14.205 ] 00:18:14.205 }' 00:18:14.205 15:44:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.205 15:44:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.464 15:44:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:14.464 15:44:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.464 15:44:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.464 [2024-12-06 15:44:57.648551] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:14.464 [2024-12-06 15:44:57.648594] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:14.464 [2024-12-06 15:44:57.648699] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:14.464 [2024-12-06 15:44:57.648780] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:14.464 [2024-12-06 15:44:57.648793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:14.464 15:44:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.464 15:44:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.464 15:44:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:14.464 15:44:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.465 15:44:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.465 15:44:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.465 15:44:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:14.465 15:44:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:14.465 15:44:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:18:14.465 15:44:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:18:14.465 15:44:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:18:14.465 15:44:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.465 15:44:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.465 15:44:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.465 15:44:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:18:14.465 15:44:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.465 15:44:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.465 [2024-12-06 15:44:57.712461] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:14.465 [2024-12-06 15:44:57.712552] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:14.465 [2024-12-06 15:44:57.712580] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:14.465 [2024-12-06 15:44:57.712592] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:14.465 [2024-12-06 15:44:57.715620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:14.465 [2024-12-06 15:44:57.715662] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:14.465 [2024-12-06 15:44:57.715759] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:14.465 [2024-12-06 15:44:57.715816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:14.465 [2024-12-06 15:44:57.715996] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:14.465 [2024-12-06 15:44:57.716011] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:14.465 [2024-12-06 15:44:57.716032] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:14.465 [2024-12-06 15:44:57.716107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:14.465 pt1 00:18:14.465 15:44:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.465 15:44:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:18:14.465 15:44:57 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:18:14.465 15:44:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:14.465 15:44:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:14.465 15:44:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:14.465 15:44:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:14.465 15:44:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:14.465 15:44:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.465 15:44:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.465 15:44:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.465 15:44:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.465 15:44:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.465 15:44:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.465 15:44:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.465 15:44:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.465 15:44:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.723 15:44:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.723 "name": "raid_bdev1", 00:18:14.723 "uuid": "ab677845-2cc9-4139-a7b8-3e85c8cbe8c1", 00:18:14.723 "strip_size_kb": 64, 00:18:14.723 "state": "configuring", 00:18:14.723 "raid_level": "raid5f", 00:18:14.723 
"superblock": true, 00:18:14.723 "num_base_bdevs": 3, 00:18:14.723 "num_base_bdevs_discovered": 1, 00:18:14.723 "num_base_bdevs_operational": 2, 00:18:14.723 "base_bdevs_list": [ 00:18:14.723 { 00:18:14.723 "name": null, 00:18:14.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.723 "is_configured": false, 00:18:14.723 "data_offset": 2048, 00:18:14.723 "data_size": 63488 00:18:14.723 }, 00:18:14.723 { 00:18:14.723 "name": "pt2", 00:18:14.723 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:14.723 "is_configured": true, 00:18:14.723 "data_offset": 2048, 00:18:14.723 "data_size": 63488 00:18:14.723 }, 00:18:14.723 { 00:18:14.723 "name": null, 00:18:14.723 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:14.723 "is_configured": false, 00:18:14.723 "data_offset": 2048, 00:18:14.723 "data_size": 63488 00:18:14.723 } 00:18:14.723 ] 00:18:14.723 }' 00:18:14.723 15:44:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.724 15:44:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.983 15:44:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:18:14.983 15:44:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.983 15:44:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.983 15:44:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:14.983 15:44:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.983 15:44:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:18:14.983 15:44:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:14.983 15:44:58 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.983 15:44:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.983 [2024-12-06 15:44:58.171815] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:14.983 [2024-12-06 15:44:58.172432] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:14.983 [2024-12-06 15:44:58.172483] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:14.983 [2024-12-06 15:44:58.172498] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:14.983 [2024-12-06 15:44:58.173159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:14.983 [2024-12-06 15:44:58.173193] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:14.983 [2024-12-06 15:44:58.173301] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:14.983 [2024-12-06 15:44:58.173337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:14.983 [2024-12-06 15:44:58.173486] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:14.983 [2024-12-06 15:44:58.173498] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:14.983 [2024-12-06 15:44:58.173836] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:14.983 [2024-12-06 15:44:58.180186] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:14.983 [2024-12-06 15:44:58.180219] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:14.983 pt3 00:18:14.983 [2024-12-06 15:44:58.180537] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:14.983 15:44:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:18:14.983 15:44:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:14.983 15:44:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:14.983 15:44:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.983 15:44:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:14.983 15:44:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:14.983 15:44:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:14.983 15:44:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.983 15:44:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.983 15:44:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.983 15:44:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.983 15:44:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.983 15:44:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.983 15:44:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.983 15:44:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.983 15:44:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.983 15:44:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.983 "name": "raid_bdev1", 00:18:14.983 "uuid": "ab677845-2cc9-4139-a7b8-3e85c8cbe8c1", 00:18:14.983 "strip_size_kb": 64, 00:18:14.983 "state": "online", 00:18:14.983 "raid_level": 
"raid5f", 00:18:14.983 "superblock": true, 00:18:14.983 "num_base_bdevs": 3, 00:18:14.983 "num_base_bdevs_discovered": 2, 00:18:14.983 "num_base_bdevs_operational": 2, 00:18:14.983 "base_bdevs_list": [ 00:18:14.983 { 00:18:14.983 "name": null, 00:18:14.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.983 "is_configured": false, 00:18:14.983 "data_offset": 2048, 00:18:14.983 "data_size": 63488 00:18:14.983 }, 00:18:14.983 { 00:18:14.983 "name": "pt2", 00:18:14.983 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:14.983 "is_configured": true, 00:18:14.983 "data_offset": 2048, 00:18:14.983 "data_size": 63488 00:18:14.983 }, 00:18:14.983 { 00:18:14.983 "name": "pt3", 00:18:14.983 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:14.983 "is_configured": true, 00:18:14.983 "data_offset": 2048, 00:18:14.983 "data_size": 63488 00:18:14.983 } 00:18:14.983 ] 00:18:14.983 }' 00:18:14.983 15:44:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.983 15:44:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.553 15:44:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:15.553 15:44:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:15.553 15:44:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.553 15:44:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.553 15:44:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.553 15:44:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:15.553 15:44:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:15.553 15:44:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:15.553 15:44:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.553 15:44:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:15.553 [2024-12-06 15:44:58.607862] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:15.553 15:44:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.553 15:44:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' ab677845-2cc9-4139-a7b8-3e85c8cbe8c1 '!=' ab677845-2cc9-4139-a7b8-3e85c8cbe8c1 ']' 00:18:15.553 15:44:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81146 00:18:15.553 15:44:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81146 ']' 00:18:15.553 15:44:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81146 00:18:15.553 15:44:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:18:15.553 15:44:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:15.553 15:44:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81146 00:18:15.553 killing process with pid 81146 00:18:15.553 15:44:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:15.553 15:44:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:15.553 15:44:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81146' 00:18:15.553 15:44:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81146 00:18:15.553 [2024-12-06 15:44:58.697550] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:15.553 [2024-12-06 15:44:58.697670] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:18:15.553 15:44:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81146 00:18:15.553 [2024-12-06 15:44:58.697747] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:15.553 [2024-12-06 15:44:58.697764] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:15.812 [2024-12-06 15:44:59.031158] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:17.192 15:45:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:18:17.192 00:18:17.192 real 0m7.613s 00:18:17.192 user 0m11.583s 00:18:17.192 sys 0m1.653s 00:18:17.192 ************************************ 00:18:17.192 END TEST raid5f_superblock_test 00:18:17.192 ************************************ 00:18:17.192 15:45:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:17.192 15:45:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.192 15:45:00 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:18:17.192 15:45:00 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:18:17.192 15:45:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:17.192 15:45:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:17.192 15:45:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:17.192 ************************************ 00:18:17.192 START TEST raid5f_rebuild_test 00:18:17.192 ************************************ 00:18:17.192 15:45:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:18:17.192 15:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:17.192 15:45:00 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:18:17.192 15:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:18:17.192 15:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:17.192 15:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:17.192 15:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:17.192 15:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:17.192 15:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:17.192 15:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:17.192 15:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:17.193 15:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:17.193 15:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:17.193 15:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:17.193 15:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:17.193 15:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:17.193 15:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:17.193 15:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:17.193 15:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:17.193 15:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:17.193 15:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:17.193 15:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:18:17.193 15:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:17.193 15:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:17.193 15:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:17.193 15:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:17.193 15:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:17.193 15:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:17.193 15:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:18:17.193 15:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81591 00:18:17.193 15:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81591 00:18:17.193 15:45:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:17.193 15:45:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81591 ']' 00:18:17.193 15:45:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.193 15:45:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:17.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:17.193 15:45:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:17.193 15:45:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:17.193 15:45:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.193 [2024-12-06 15:45:00.472841] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:18:17.193 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:17.193 Zero copy mechanism will not be used. 00:18:17.193 [2024-12-06 15:45:00.473210] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81591 ] 00:18:17.453 [2024-12-06 15:45:00.663476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.711 [2024-12-06 15:45:00.811876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.970 [2024-12-06 15:45:01.042276] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:17.970 [2024-12-06 15:45:01.042317] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:18.228 15:45:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:18.228 15:45:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:18:18.228 15:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:18.229 15:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:18.229 15:45:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.229 15:45:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.229 BaseBdev1_malloc 00:18:18.229 15:45:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.229 
15:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:18.229 15:45:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.229 15:45:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.229 [2024-12-06 15:45:01.386465] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:18.229 [2024-12-06 15:45:01.386562] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.229 [2024-12-06 15:45:01.386591] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:18.229 [2024-12-06 15:45:01.386608] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.229 [2024-12-06 15:45:01.389293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.229 [2024-12-06 15:45:01.389344] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:18.229 BaseBdev1 00:18:18.229 15:45:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.229 15:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:18.229 15:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:18.229 15:45:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.229 15:45:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.229 BaseBdev2_malloc 00:18:18.229 15:45:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.229 15:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:18.229 15:45:01 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.229 15:45:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.229 [2024-12-06 15:45:01.452629] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:18.229 [2024-12-06 15:45:01.452842] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.229 [2024-12-06 15:45:01.452880] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:18.229 [2024-12-06 15:45:01.452898] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.229 [2024-12-06 15:45:01.455710] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.229 [2024-12-06 15:45:01.455755] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:18.229 BaseBdev2 00:18:18.229 15:45:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.229 15:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:18.229 15:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:18.229 15:45:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.229 15:45:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.488 BaseBdev3_malloc 00:18:18.488 15:45:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.488 15:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:18.488 15:45:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.488 15:45:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.488 [2024-12-06 15:45:01.529572] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:18.488 [2024-12-06 15:45:01.529631] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.488 [2024-12-06 15:45:01.529658] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:18.488 [2024-12-06 15:45:01.529673] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.488 [2024-12-06 15:45:01.532392] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.488 [2024-12-06 15:45:01.532436] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:18.488 BaseBdev3 00:18:18.488 15:45:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.488 15:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:18.488 15:45:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.488 15:45:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.488 spare_malloc 00:18:18.488 15:45:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.488 15:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:18.488 15:45:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.488 15:45:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.488 spare_delay 00:18:18.488 15:45:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.488 15:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:18.488 15:45:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:18.488 15:45:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.488 [2024-12-06 15:45:01.605077] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:18.488 [2024-12-06 15:45:01.605138] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.488 [2024-12-06 15:45:01.605161] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:18:18.488 [2024-12-06 15:45:01.605175] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.488 [2024-12-06 15:45:01.607862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.488 [2024-12-06 15:45:01.607909] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:18.488 spare 00:18:18.488 15:45:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.488 15:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:18:18.488 15:45:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.488 15:45:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.488 [2024-12-06 15:45:01.617135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:18.488 [2024-12-06 15:45:01.619513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:18.488 [2024-12-06 15:45:01.619584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:18.488 [2024-12-06 15:45:01.619678] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:18.488 [2024-12-06 15:45:01.619691] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:18:18.488 [2024-12-06 
15:45:01.619973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:18.488 [2024-12-06 15:45:01.626385] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:18.488 [2024-12-06 15:45:01.626411] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:18.488 [2024-12-06 15:45:01.626643] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:18.488 15:45:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.488 15:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:18.488 15:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:18.488 15:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:18.488 15:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:18.488 15:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:18.488 15:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:18.488 15:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.488 15:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.488 15:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.488 15:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.488 15:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.488 15:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.488 15:45:01 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.488 15:45:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.488 15:45:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.488 15:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.488 "name": "raid_bdev1", 00:18:18.488 "uuid": "7d9d25d7-a820-4862-8293-5debbad171cd", 00:18:18.488 "strip_size_kb": 64, 00:18:18.488 "state": "online", 00:18:18.488 "raid_level": "raid5f", 00:18:18.488 "superblock": false, 00:18:18.488 "num_base_bdevs": 3, 00:18:18.488 "num_base_bdevs_discovered": 3, 00:18:18.488 "num_base_bdevs_operational": 3, 00:18:18.488 "base_bdevs_list": [ 00:18:18.488 { 00:18:18.488 "name": "BaseBdev1", 00:18:18.488 "uuid": "630fb5a4-edb7-582e-a276-69accf66fb60", 00:18:18.488 "is_configured": true, 00:18:18.488 "data_offset": 0, 00:18:18.488 "data_size": 65536 00:18:18.488 }, 00:18:18.488 { 00:18:18.488 "name": "BaseBdev2", 00:18:18.488 "uuid": "f658245f-0060-5ec1-beb1-42d29c52d785", 00:18:18.488 "is_configured": true, 00:18:18.488 "data_offset": 0, 00:18:18.488 "data_size": 65536 00:18:18.488 }, 00:18:18.488 { 00:18:18.488 "name": "BaseBdev3", 00:18:18.488 "uuid": "4ac6ee62-7f83-58d2-9cab-0d2301231e9d", 00:18:18.488 "is_configured": true, 00:18:18.488 "data_offset": 0, 00:18:18.488 "data_size": 65536 00:18:18.488 } 00:18:18.488 ] 00:18:18.488 }' 00:18:18.488 15:45:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.488 15:45:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.056 15:45:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:19.056 15:45:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:19.056 15:45:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.056 15:45:02 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.056 [2024-12-06 15:45:02.065867] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:19.056 15:45:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.056 15:45:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:18:19.056 15:45:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.056 15:45:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.056 15:45:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:19.056 15:45:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.056 15:45:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.056 15:45:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:18:19.056 15:45:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:19.056 15:45:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:19.056 15:45:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:19.056 15:45:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:19.056 15:45:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:19.056 15:45:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:19.056 15:45:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:19.056 15:45:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:19.056 15:45:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:18:19.056 15:45:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:19.056 15:45:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:19.056 15:45:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:19.056 15:45:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:19.056 [2024-12-06 15:45:02.325311] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:19.056 /dev/nbd0 00:18:19.316 15:45:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:19.316 15:45:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:19.316 15:45:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:19.316 15:45:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:19.316 15:45:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:19.316 15:45:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:19.316 15:45:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:19.316 15:45:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:19.316 15:45:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:19.316 15:45:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:19.316 15:45:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:19.316 1+0 records in 00:18:19.316 1+0 records out 00:18:19.316 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260661 s, 15.7 MB/s 00:18:19.316 
15:45:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:19.316 15:45:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:19.316 15:45:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:19.316 15:45:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:19.316 15:45:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:19.316 15:45:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:19.316 15:45:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:19.317 15:45:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:19.317 15:45:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:18:19.317 15:45:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:18:19.317 15:45:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:18:19.576 512+0 records in 00:18:19.576 512+0 records out 00:18:19.576 67108864 bytes (67 MB, 64 MiB) copied, 0.401993 s, 167 MB/s 00:18:19.576 15:45:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:19.576 15:45:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:19.576 15:45:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:19.576 15:45:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:19.576 15:45:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:19.576 15:45:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:18:19.576 15:45:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:19.841 15:45:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:19.841 [2024-12-06 15:45:03.030271] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:19.841 15:45:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:19.841 15:45:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:19.841 15:45:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:19.841 15:45:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:19.841 15:45:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:19.841 15:45:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:19.841 15:45:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:19.841 15:45:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:19.841 15:45:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.841 15:45:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.841 [2024-12-06 15:45:03.046052] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:19.841 15:45:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.841 15:45:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:19.841 15:45:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:19.841 15:45:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:19.841 15:45:03 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:19.841 15:45:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:19.841 15:45:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:19.841 15:45:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.841 15:45:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.841 15:45:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.841 15:45:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.841 15:45:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.841 15:45:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.841 15:45:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.841 15:45:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.841 15:45:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.841 15:45:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.841 "name": "raid_bdev1", 00:18:19.841 "uuid": "7d9d25d7-a820-4862-8293-5debbad171cd", 00:18:19.841 "strip_size_kb": 64, 00:18:19.841 "state": "online", 00:18:19.841 "raid_level": "raid5f", 00:18:19.841 "superblock": false, 00:18:19.841 "num_base_bdevs": 3, 00:18:19.841 "num_base_bdevs_discovered": 2, 00:18:19.841 "num_base_bdevs_operational": 2, 00:18:19.841 "base_bdevs_list": [ 00:18:19.841 { 00:18:19.841 "name": null, 00:18:19.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.841 "is_configured": false, 00:18:19.841 "data_offset": 0, 00:18:19.841 "data_size": 65536 00:18:19.841 }, 00:18:19.841 { 00:18:19.841 
"name": "BaseBdev2", 00:18:19.841 "uuid": "f658245f-0060-5ec1-beb1-42d29c52d785", 00:18:19.841 "is_configured": true, 00:18:19.841 "data_offset": 0, 00:18:19.841 "data_size": 65536 00:18:19.841 }, 00:18:19.841 { 00:18:19.841 "name": "BaseBdev3", 00:18:19.841 "uuid": "4ac6ee62-7f83-58d2-9cab-0d2301231e9d", 00:18:19.841 "is_configured": true, 00:18:19.841 "data_offset": 0, 00:18:19.841 "data_size": 65536 00:18:19.841 } 00:18:19.841 ] 00:18:19.841 }' 00:18:19.841 15:45:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.841 15:45:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.471 15:45:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:20.471 15:45:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.471 15:45:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.471 [2024-12-06 15:45:03.473693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:20.471 [2024-12-06 15:45:03.493606] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:18:20.471 15:45:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.471 15:45:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:20.471 [2024-12-06 15:45:03.502019] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:21.411 15:45:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:21.411 15:45:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.411 15:45:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:21.411 15:45:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:18:21.411 15:45:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.411 15:45:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.411 15:45:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.411 15:45:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.411 15:45:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.411 15:45:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.411 15:45:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.411 "name": "raid_bdev1", 00:18:21.411 "uuid": "7d9d25d7-a820-4862-8293-5debbad171cd", 00:18:21.411 "strip_size_kb": 64, 00:18:21.411 "state": "online", 00:18:21.411 "raid_level": "raid5f", 00:18:21.411 "superblock": false, 00:18:21.411 "num_base_bdevs": 3, 00:18:21.411 "num_base_bdevs_discovered": 3, 00:18:21.411 "num_base_bdevs_operational": 3, 00:18:21.411 "process": { 00:18:21.411 "type": "rebuild", 00:18:21.411 "target": "spare", 00:18:21.411 "progress": { 00:18:21.411 "blocks": 20480, 00:18:21.411 "percent": 15 00:18:21.411 } 00:18:21.411 }, 00:18:21.411 "base_bdevs_list": [ 00:18:21.411 { 00:18:21.411 "name": "spare", 00:18:21.411 "uuid": "806e69c9-e5c8-51e7-9396-30b9c071f0c3", 00:18:21.411 "is_configured": true, 00:18:21.411 "data_offset": 0, 00:18:21.411 "data_size": 65536 00:18:21.411 }, 00:18:21.411 { 00:18:21.411 "name": "BaseBdev2", 00:18:21.411 "uuid": "f658245f-0060-5ec1-beb1-42d29c52d785", 00:18:21.411 "is_configured": true, 00:18:21.411 "data_offset": 0, 00:18:21.411 "data_size": 65536 00:18:21.411 }, 00:18:21.411 { 00:18:21.411 "name": "BaseBdev3", 00:18:21.411 "uuid": "4ac6ee62-7f83-58d2-9cab-0d2301231e9d", 00:18:21.411 "is_configured": true, 00:18:21.411 "data_offset": 0, 00:18:21.411 
"data_size": 65536 00:18:21.411 } 00:18:21.411 ] 00:18:21.411 }' 00:18:21.411 15:45:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.411 15:45:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:21.411 15:45:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.411 15:45:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:21.411 15:45:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:21.411 15:45:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.411 15:45:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.411 [2024-12-06 15:45:04.662093] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:21.671 [2024-12-06 15:45:04.712973] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:21.671 [2024-12-06 15:45:04.713047] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:21.671 [2024-12-06 15:45:04.713071] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:21.671 [2024-12-06 15:45:04.713082] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:21.671 15:45:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.671 15:45:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:21.671 15:45:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:21.671 15:45:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:21.671 15:45:04 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:21.671 15:45:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:21.671 15:45:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:21.671 15:45:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.671 15:45:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.671 15:45:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.671 15:45:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.671 15:45:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.671 15:45:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.671 15:45:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.671 15:45:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.671 15:45:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.671 15:45:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.671 "name": "raid_bdev1", 00:18:21.671 "uuid": "7d9d25d7-a820-4862-8293-5debbad171cd", 00:18:21.671 "strip_size_kb": 64, 00:18:21.671 "state": "online", 00:18:21.671 "raid_level": "raid5f", 00:18:21.671 "superblock": false, 00:18:21.671 "num_base_bdevs": 3, 00:18:21.671 "num_base_bdevs_discovered": 2, 00:18:21.671 "num_base_bdevs_operational": 2, 00:18:21.671 "base_bdevs_list": [ 00:18:21.671 { 00:18:21.671 "name": null, 00:18:21.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.671 "is_configured": false, 00:18:21.671 "data_offset": 0, 00:18:21.671 "data_size": 65536 00:18:21.671 }, 00:18:21.671 { 00:18:21.671 "name": "BaseBdev2", 00:18:21.671 
"uuid": "f658245f-0060-5ec1-beb1-42d29c52d785", 00:18:21.671 "is_configured": true, 00:18:21.671 "data_offset": 0, 00:18:21.671 "data_size": 65536 00:18:21.671 }, 00:18:21.671 { 00:18:21.671 "name": "BaseBdev3", 00:18:21.671 "uuid": "4ac6ee62-7f83-58d2-9cab-0d2301231e9d", 00:18:21.671 "is_configured": true, 00:18:21.671 "data_offset": 0, 00:18:21.671 "data_size": 65536 00:18:21.671 } 00:18:21.671 ] 00:18:21.671 }' 00:18:21.671 15:45:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.671 15:45:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.931 15:45:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:21.931 15:45:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.931 15:45:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:21.931 15:45:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:21.931 15:45:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.931 15:45:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.931 15:45:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.931 15:45:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.931 15:45:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.189 15:45:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.189 15:45:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:22.189 "name": "raid_bdev1", 00:18:22.189 "uuid": "7d9d25d7-a820-4862-8293-5debbad171cd", 00:18:22.189 "strip_size_kb": 64, 00:18:22.189 "state": "online", 00:18:22.189 "raid_level": 
"raid5f", 00:18:22.189 "superblock": false, 00:18:22.189 "num_base_bdevs": 3, 00:18:22.189 "num_base_bdevs_discovered": 2, 00:18:22.189 "num_base_bdevs_operational": 2, 00:18:22.189 "base_bdevs_list": [ 00:18:22.189 { 00:18:22.189 "name": null, 00:18:22.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.189 "is_configured": false, 00:18:22.189 "data_offset": 0, 00:18:22.189 "data_size": 65536 00:18:22.189 }, 00:18:22.189 { 00:18:22.189 "name": "BaseBdev2", 00:18:22.189 "uuid": "f658245f-0060-5ec1-beb1-42d29c52d785", 00:18:22.189 "is_configured": true, 00:18:22.189 "data_offset": 0, 00:18:22.189 "data_size": 65536 00:18:22.189 }, 00:18:22.189 { 00:18:22.189 "name": "BaseBdev3", 00:18:22.189 "uuid": "4ac6ee62-7f83-58d2-9cab-0d2301231e9d", 00:18:22.189 "is_configured": true, 00:18:22.189 "data_offset": 0, 00:18:22.189 "data_size": 65536 00:18:22.189 } 00:18:22.189 ] 00:18:22.189 }' 00:18:22.189 15:45:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:22.189 15:45:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:22.189 15:45:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:22.189 15:45:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:22.189 15:45:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:22.189 15:45:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.189 15:45:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.189 [2024-12-06 15:45:05.345271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:22.189 [2024-12-06 15:45:05.362146] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:18:22.189 15:45:05 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.189 15:45:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:22.189 [2024-12-06 15:45:05.370730] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:23.124 15:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:23.124 15:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:23.124 15:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:23.124 15:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:23.124 15:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:23.124 15:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.124 15:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.124 15:45:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.124 15:45:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.124 15:45:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.384 15:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.384 "name": "raid_bdev1", 00:18:23.384 "uuid": "7d9d25d7-a820-4862-8293-5debbad171cd", 00:18:23.384 "strip_size_kb": 64, 00:18:23.384 "state": "online", 00:18:23.384 "raid_level": "raid5f", 00:18:23.384 "superblock": false, 00:18:23.384 "num_base_bdevs": 3, 00:18:23.384 "num_base_bdevs_discovered": 3, 00:18:23.384 "num_base_bdevs_operational": 3, 00:18:23.384 "process": { 00:18:23.384 "type": "rebuild", 00:18:23.384 "target": "spare", 00:18:23.384 "progress": { 00:18:23.385 "blocks": 20480, 00:18:23.385 
"percent": 15 00:18:23.385 } 00:18:23.385 }, 00:18:23.385 "base_bdevs_list": [ 00:18:23.385 { 00:18:23.385 "name": "spare", 00:18:23.385 "uuid": "806e69c9-e5c8-51e7-9396-30b9c071f0c3", 00:18:23.385 "is_configured": true, 00:18:23.385 "data_offset": 0, 00:18:23.385 "data_size": 65536 00:18:23.385 }, 00:18:23.385 { 00:18:23.385 "name": "BaseBdev2", 00:18:23.385 "uuid": "f658245f-0060-5ec1-beb1-42d29c52d785", 00:18:23.385 "is_configured": true, 00:18:23.385 "data_offset": 0, 00:18:23.385 "data_size": 65536 00:18:23.385 }, 00:18:23.385 { 00:18:23.385 "name": "BaseBdev3", 00:18:23.385 "uuid": "4ac6ee62-7f83-58d2-9cab-0d2301231e9d", 00:18:23.385 "is_configured": true, 00:18:23.385 "data_offset": 0, 00:18:23.385 "data_size": 65536 00:18:23.385 } 00:18:23.385 ] 00:18:23.385 }' 00:18:23.385 15:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:23.385 15:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:23.385 15:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:23.385 15:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:23.385 15:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:23.385 15:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:18:23.385 15:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:23.385 15:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=554 00:18:23.385 15:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:23.385 15:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:23.385 15:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:18:23.385 15:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:23.385 15:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:23.385 15:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:23.385 15:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.385 15:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.385 15:45:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.385 15:45:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.385 15:45:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.385 15:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.385 "name": "raid_bdev1", 00:18:23.385 "uuid": "7d9d25d7-a820-4862-8293-5debbad171cd", 00:18:23.385 "strip_size_kb": 64, 00:18:23.385 "state": "online", 00:18:23.385 "raid_level": "raid5f", 00:18:23.385 "superblock": false, 00:18:23.385 "num_base_bdevs": 3, 00:18:23.385 "num_base_bdevs_discovered": 3, 00:18:23.385 "num_base_bdevs_operational": 3, 00:18:23.385 "process": { 00:18:23.385 "type": "rebuild", 00:18:23.385 "target": "spare", 00:18:23.385 "progress": { 00:18:23.385 "blocks": 22528, 00:18:23.385 "percent": 17 00:18:23.385 } 00:18:23.385 }, 00:18:23.385 "base_bdevs_list": [ 00:18:23.385 { 00:18:23.385 "name": "spare", 00:18:23.385 "uuid": "806e69c9-e5c8-51e7-9396-30b9c071f0c3", 00:18:23.385 "is_configured": true, 00:18:23.385 "data_offset": 0, 00:18:23.385 "data_size": 65536 00:18:23.385 }, 00:18:23.385 { 00:18:23.385 "name": "BaseBdev2", 00:18:23.385 "uuid": "f658245f-0060-5ec1-beb1-42d29c52d785", 00:18:23.385 "is_configured": true, 00:18:23.385 "data_offset": 0, 00:18:23.385 
"data_size": 65536 00:18:23.385 }, 00:18:23.385 { 00:18:23.385 "name": "BaseBdev3", 00:18:23.385 "uuid": "4ac6ee62-7f83-58d2-9cab-0d2301231e9d", 00:18:23.385 "is_configured": true, 00:18:23.385 "data_offset": 0, 00:18:23.385 "data_size": 65536 00:18:23.385 } 00:18:23.385 ] 00:18:23.385 }' 00:18:23.385 15:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:23.385 15:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:23.385 15:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:23.385 15:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:23.385 15:45:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:24.764 15:45:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:24.764 15:45:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:24.764 15:45:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:24.764 15:45:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:24.764 15:45:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:24.764 15:45:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:24.764 15:45:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.764 15:45:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.764 15:45:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.764 15:45:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.764 15:45:07 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.764 15:45:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:24.764 "name": "raid_bdev1", 00:18:24.764 "uuid": "7d9d25d7-a820-4862-8293-5debbad171cd", 00:18:24.764 "strip_size_kb": 64, 00:18:24.764 "state": "online", 00:18:24.764 "raid_level": "raid5f", 00:18:24.764 "superblock": false, 00:18:24.764 "num_base_bdevs": 3, 00:18:24.764 "num_base_bdevs_discovered": 3, 00:18:24.764 "num_base_bdevs_operational": 3, 00:18:24.764 "process": { 00:18:24.764 "type": "rebuild", 00:18:24.764 "target": "spare", 00:18:24.764 "progress": { 00:18:24.764 "blocks": 45056, 00:18:24.764 "percent": 34 00:18:24.764 } 00:18:24.764 }, 00:18:24.764 "base_bdevs_list": [ 00:18:24.764 { 00:18:24.764 "name": "spare", 00:18:24.764 "uuid": "806e69c9-e5c8-51e7-9396-30b9c071f0c3", 00:18:24.764 "is_configured": true, 00:18:24.764 "data_offset": 0, 00:18:24.764 "data_size": 65536 00:18:24.764 }, 00:18:24.764 { 00:18:24.764 "name": "BaseBdev2", 00:18:24.764 "uuid": "f658245f-0060-5ec1-beb1-42d29c52d785", 00:18:24.764 "is_configured": true, 00:18:24.764 "data_offset": 0, 00:18:24.764 "data_size": 65536 00:18:24.764 }, 00:18:24.764 { 00:18:24.764 "name": "BaseBdev3", 00:18:24.764 "uuid": "4ac6ee62-7f83-58d2-9cab-0d2301231e9d", 00:18:24.764 "is_configured": true, 00:18:24.764 "data_offset": 0, 00:18:24.764 "data_size": 65536 00:18:24.764 } 00:18:24.764 ] 00:18:24.764 }' 00:18:24.764 15:45:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:24.764 15:45:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:24.764 15:45:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:24.764 15:45:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:24.764 15:45:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:18:25.703 15:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:25.703 15:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:25.703 15:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:25.703 15:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:25.703 15:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:25.703 15:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:25.703 15:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.703 15:45:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.703 15:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.703 15:45:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.703 15:45:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.703 15:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:25.703 "name": "raid_bdev1", 00:18:25.703 "uuid": "7d9d25d7-a820-4862-8293-5debbad171cd", 00:18:25.703 "strip_size_kb": 64, 00:18:25.703 "state": "online", 00:18:25.703 "raid_level": "raid5f", 00:18:25.703 "superblock": false, 00:18:25.703 "num_base_bdevs": 3, 00:18:25.703 "num_base_bdevs_discovered": 3, 00:18:25.703 "num_base_bdevs_operational": 3, 00:18:25.703 "process": { 00:18:25.703 "type": "rebuild", 00:18:25.703 "target": "spare", 00:18:25.703 "progress": { 00:18:25.703 "blocks": 69632, 00:18:25.703 "percent": 53 00:18:25.703 } 00:18:25.703 }, 00:18:25.703 "base_bdevs_list": [ 00:18:25.703 { 00:18:25.703 "name": "spare", 00:18:25.703 "uuid": 
"806e69c9-e5c8-51e7-9396-30b9c071f0c3", 00:18:25.703 "is_configured": true, 00:18:25.703 "data_offset": 0, 00:18:25.703 "data_size": 65536 00:18:25.703 }, 00:18:25.703 { 00:18:25.703 "name": "BaseBdev2", 00:18:25.703 "uuid": "f658245f-0060-5ec1-beb1-42d29c52d785", 00:18:25.703 "is_configured": true, 00:18:25.703 "data_offset": 0, 00:18:25.703 "data_size": 65536 00:18:25.703 }, 00:18:25.703 { 00:18:25.703 "name": "BaseBdev3", 00:18:25.703 "uuid": "4ac6ee62-7f83-58d2-9cab-0d2301231e9d", 00:18:25.703 "is_configured": true, 00:18:25.703 "data_offset": 0, 00:18:25.703 "data_size": 65536 00:18:25.703 } 00:18:25.703 ] 00:18:25.703 }' 00:18:25.703 15:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:25.703 15:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:25.703 15:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:25.703 15:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:25.703 15:45:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:26.643 15:45:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:26.643 15:45:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:26.643 15:45:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:26.643 15:45:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:26.643 15:45:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:26.643 15:45:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:26.643 15:45:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.643 15:45:09 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.643 15:45:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.643 15:45:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.903 15:45:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.903 15:45:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:26.903 "name": "raid_bdev1", 00:18:26.903 "uuid": "7d9d25d7-a820-4862-8293-5debbad171cd", 00:18:26.903 "strip_size_kb": 64, 00:18:26.903 "state": "online", 00:18:26.903 "raid_level": "raid5f", 00:18:26.903 "superblock": false, 00:18:26.903 "num_base_bdevs": 3, 00:18:26.903 "num_base_bdevs_discovered": 3, 00:18:26.903 "num_base_bdevs_operational": 3, 00:18:26.903 "process": { 00:18:26.903 "type": "rebuild", 00:18:26.903 "target": "spare", 00:18:26.903 "progress": { 00:18:26.903 "blocks": 92160, 00:18:26.903 "percent": 70 00:18:26.903 } 00:18:26.903 }, 00:18:26.903 "base_bdevs_list": [ 00:18:26.903 { 00:18:26.903 "name": "spare", 00:18:26.903 "uuid": "806e69c9-e5c8-51e7-9396-30b9c071f0c3", 00:18:26.903 "is_configured": true, 00:18:26.903 "data_offset": 0, 00:18:26.903 "data_size": 65536 00:18:26.903 }, 00:18:26.903 { 00:18:26.903 "name": "BaseBdev2", 00:18:26.903 "uuid": "f658245f-0060-5ec1-beb1-42d29c52d785", 00:18:26.903 "is_configured": true, 00:18:26.903 "data_offset": 0, 00:18:26.903 "data_size": 65536 00:18:26.903 }, 00:18:26.903 { 00:18:26.903 "name": "BaseBdev3", 00:18:26.903 "uuid": "4ac6ee62-7f83-58d2-9cab-0d2301231e9d", 00:18:26.903 "is_configured": true, 00:18:26.903 "data_offset": 0, 00:18:26.903 "data_size": 65536 00:18:26.903 } 00:18:26.903 ] 00:18:26.903 }' 00:18:26.903 15:45:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:26.903 15:45:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:26.903 15:45:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:26.903 15:45:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:26.903 15:45:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:27.844 15:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:27.844 15:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:27.844 15:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:27.844 15:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:27.844 15:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:27.844 15:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:27.844 15:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.844 15:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.844 15:45:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.844 15:45:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.844 15:45:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.844 15:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:27.844 "name": "raid_bdev1", 00:18:27.844 "uuid": "7d9d25d7-a820-4862-8293-5debbad171cd", 00:18:27.844 "strip_size_kb": 64, 00:18:27.844 "state": "online", 00:18:27.844 "raid_level": "raid5f", 00:18:27.844 "superblock": false, 00:18:27.844 "num_base_bdevs": 3, 00:18:27.844 "num_base_bdevs_discovered": 3, 00:18:27.844 
"num_base_bdevs_operational": 3, 00:18:27.844 "process": { 00:18:27.844 "type": "rebuild", 00:18:27.844 "target": "spare", 00:18:27.844 "progress": { 00:18:27.844 "blocks": 114688, 00:18:27.844 "percent": 87 00:18:27.844 } 00:18:27.844 }, 00:18:27.844 "base_bdevs_list": [ 00:18:27.844 { 00:18:27.844 "name": "spare", 00:18:27.844 "uuid": "806e69c9-e5c8-51e7-9396-30b9c071f0c3", 00:18:27.844 "is_configured": true, 00:18:27.844 "data_offset": 0, 00:18:27.844 "data_size": 65536 00:18:27.844 }, 00:18:27.844 { 00:18:27.844 "name": "BaseBdev2", 00:18:27.844 "uuid": "f658245f-0060-5ec1-beb1-42d29c52d785", 00:18:27.844 "is_configured": true, 00:18:27.844 "data_offset": 0, 00:18:27.844 "data_size": 65536 00:18:27.844 }, 00:18:27.844 { 00:18:27.844 "name": "BaseBdev3", 00:18:27.844 "uuid": "4ac6ee62-7f83-58d2-9cab-0d2301231e9d", 00:18:27.844 "is_configured": true, 00:18:27.844 "data_offset": 0, 00:18:27.844 "data_size": 65536 00:18:27.844 } 00:18:27.844 ] 00:18:27.844 }' 00:18:27.844 15:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:28.103 15:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:28.103 15:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:28.103 15:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:28.103 15:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:28.668 [2024-12-06 15:45:11.824208] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:28.668 [2024-12-06 15:45:11.824309] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:28.668 [2024-12-06 15:45:11.824363] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:28.926 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:18:28.926 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:28.926 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:28.926 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:28.926 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:28.926 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:28.926 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.926 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.926 15:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.926 15:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.184 15:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.184 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:29.184 "name": "raid_bdev1", 00:18:29.184 "uuid": "7d9d25d7-a820-4862-8293-5debbad171cd", 00:18:29.184 "strip_size_kb": 64, 00:18:29.184 "state": "online", 00:18:29.184 "raid_level": "raid5f", 00:18:29.184 "superblock": false, 00:18:29.184 "num_base_bdevs": 3, 00:18:29.184 "num_base_bdevs_discovered": 3, 00:18:29.184 "num_base_bdevs_operational": 3, 00:18:29.184 "base_bdevs_list": [ 00:18:29.184 { 00:18:29.184 "name": "spare", 00:18:29.184 "uuid": "806e69c9-e5c8-51e7-9396-30b9c071f0c3", 00:18:29.184 "is_configured": true, 00:18:29.184 "data_offset": 0, 00:18:29.184 "data_size": 65536 00:18:29.184 }, 00:18:29.184 { 00:18:29.184 "name": "BaseBdev2", 00:18:29.184 "uuid": "f658245f-0060-5ec1-beb1-42d29c52d785", 00:18:29.184 "is_configured": true, 00:18:29.184 
"data_offset": 0, 00:18:29.184 "data_size": 65536 00:18:29.184 }, 00:18:29.184 { 00:18:29.184 "name": "BaseBdev3", 00:18:29.184 "uuid": "4ac6ee62-7f83-58d2-9cab-0d2301231e9d", 00:18:29.184 "is_configured": true, 00:18:29.184 "data_offset": 0, 00:18:29.184 "data_size": 65536 00:18:29.184 } 00:18:29.184 ] 00:18:29.184 }' 00:18:29.184 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:29.184 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:29.184 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:29.184 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:29.184 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:18:29.184 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:29.184 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:29.184 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:29.184 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:29.184 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:29.184 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.184 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.184 15:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.184 15:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.184 15:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.184 15:45:12 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:29.184 "name": "raid_bdev1", 00:18:29.184 "uuid": "7d9d25d7-a820-4862-8293-5debbad171cd", 00:18:29.184 "strip_size_kb": 64, 00:18:29.184 "state": "online", 00:18:29.184 "raid_level": "raid5f", 00:18:29.184 "superblock": false, 00:18:29.184 "num_base_bdevs": 3, 00:18:29.184 "num_base_bdevs_discovered": 3, 00:18:29.184 "num_base_bdevs_operational": 3, 00:18:29.184 "base_bdevs_list": [ 00:18:29.184 { 00:18:29.184 "name": "spare", 00:18:29.184 "uuid": "806e69c9-e5c8-51e7-9396-30b9c071f0c3", 00:18:29.184 "is_configured": true, 00:18:29.184 "data_offset": 0, 00:18:29.184 "data_size": 65536 00:18:29.184 }, 00:18:29.184 { 00:18:29.184 "name": "BaseBdev2", 00:18:29.184 "uuid": "f658245f-0060-5ec1-beb1-42d29c52d785", 00:18:29.184 "is_configured": true, 00:18:29.184 "data_offset": 0, 00:18:29.184 "data_size": 65536 00:18:29.184 }, 00:18:29.184 { 00:18:29.184 "name": "BaseBdev3", 00:18:29.184 "uuid": "4ac6ee62-7f83-58d2-9cab-0d2301231e9d", 00:18:29.184 "is_configured": true, 00:18:29.184 "data_offset": 0, 00:18:29.184 "data_size": 65536 00:18:29.184 } 00:18:29.184 ] 00:18:29.184 }' 00:18:29.184 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:29.184 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:29.184 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:29.184 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:29.184 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:29.184 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:29.184 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:29.184 15:45:12 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:29.184 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:29.184 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:29.184 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.184 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.184 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.184 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.184 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.184 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.184 15:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.184 15:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.442 15:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.442 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.442 "name": "raid_bdev1", 00:18:29.442 "uuid": "7d9d25d7-a820-4862-8293-5debbad171cd", 00:18:29.442 "strip_size_kb": 64, 00:18:29.442 "state": "online", 00:18:29.442 "raid_level": "raid5f", 00:18:29.442 "superblock": false, 00:18:29.442 "num_base_bdevs": 3, 00:18:29.442 "num_base_bdevs_discovered": 3, 00:18:29.442 "num_base_bdevs_operational": 3, 00:18:29.442 "base_bdevs_list": [ 00:18:29.442 { 00:18:29.442 "name": "spare", 00:18:29.442 "uuid": "806e69c9-e5c8-51e7-9396-30b9c071f0c3", 00:18:29.442 "is_configured": true, 00:18:29.442 "data_offset": 0, 00:18:29.442 "data_size": 65536 00:18:29.442 }, 00:18:29.442 { 00:18:29.442 
"name": "BaseBdev2", 00:18:29.442 "uuid": "f658245f-0060-5ec1-beb1-42d29c52d785", 00:18:29.442 "is_configured": true, 00:18:29.442 "data_offset": 0, 00:18:29.442 "data_size": 65536 00:18:29.442 }, 00:18:29.442 { 00:18:29.442 "name": "BaseBdev3", 00:18:29.442 "uuid": "4ac6ee62-7f83-58d2-9cab-0d2301231e9d", 00:18:29.442 "is_configured": true, 00:18:29.442 "data_offset": 0, 00:18:29.442 "data_size": 65536 00:18:29.442 } 00:18:29.442 ] 00:18:29.442 }' 00:18:29.442 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.442 15:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.700 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:29.700 15:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.700 15:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.700 [2024-12-06 15:45:12.879658] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:29.700 [2024-12-06 15:45:12.879821] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:29.700 [2024-12-06 15:45:12.880029] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:29.700 [2024-12-06 15:45:12.880227] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:29.700 [2024-12-06 15:45:12.880345] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:29.700 15:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.700 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.700 15:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.700 15:45:12 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:18:29.700 15:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.700 15:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.700 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:29.700 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:29.700 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:29.700 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:29.701 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:29.701 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:29.701 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:29.701 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:29.701 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:29.701 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:29.701 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:29.701 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:29.701 15:45:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:29.959 /dev/nbd0 00:18:29.959 15:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:29.959 15:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:29.959 15:45:13 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:29.959 15:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:29.959 15:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:29.959 15:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:29.959 15:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:29.959 15:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:29.959 15:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:29.959 15:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:29.959 15:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:29.959 1+0 records in 00:18:29.959 1+0 records out 00:18:29.959 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230018 s, 17.8 MB/s 00:18:29.959 15:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:29.959 15:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:29.959 15:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:29.959 15:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:29.959 15:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:29.959 15:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:29.959 15:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:29.959 15:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:30.218 /dev/nbd1 00:18:30.218 15:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:30.218 15:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:30.218 15:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:30.218 15:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:30.218 15:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:30.218 15:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:30.218 15:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:30.218 15:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:30.218 15:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:30.218 15:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:30.218 15:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:30.218 1+0 records in 00:18:30.218 1+0 records out 00:18:30.218 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295969 s, 13.8 MB/s 00:18:30.218 15:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:30.218 15:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:30.218 15:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:30.218 15:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:30.218 15:45:13 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:30.218 15:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:30.218 15:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:30.218 15:45:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:30.494 15:45:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:30.494 15:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:30.494 15:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:30.494 15:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:30.494 15:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:30.494 15:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:30.494 15:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:30.781 15:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:30.781 15:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:30.781 15:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:30.781 15:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:30.781 15:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:30.781 15:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:30.781 15:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:30.781 15:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:18:30.781 15:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:30.781 15:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:31.041 15:45:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:31.041 15:45:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:31.041 15:45:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:31.041 15:45:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:31.041 15:45:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:31.041 15:45:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:31.041 15:45:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:31.041 15:45:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:31.041 15:45:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:18:31.041 15:45:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81591 00:18:31.041 15:45:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81591 ']' 00:18:31.041 15:45:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81591 00:18:31.041 15:45:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:18:31.041 15:45:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:31.041 15:45:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81591 00:18:31.041 15:45:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:31.041 killing process with pid 81591 00:18:31.041 
Received shutdown signal, test time was about 60.000000 seconds 00:18:31.041 00:18:31.041 Latency(us) 00:18:31.041 [2024-12-06T15:45:14.336Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.041 [2024-12-06T15:45:14.336Z] =================================================================================================================== 00:18:31.041 [2024-12-06T15:45:14.336Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:31.041 15:45:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:31.041 15:45:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81591' 00:18:31.041 15:45:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81591 00:18:31.041 [2024-12-06 15:45:14.157669] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:31.041 15:45:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81591 00:18:31.301 [2024-12-06 15:45:14.590977] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:32.692 ************************************ 00:18:32.692 END TEST raid5f_rebuild_test 00:18:32.692 ************************************ 00:18:32.692 15:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:18:32.692 00:18:32.692 real 0m15.438s 00:18:32.692 user 0m18.671s 00:18:32.692 sys 0m2.399s 00:18:32.692 15:45:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:32.692 15:45:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.692 15:45:15 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:18:32.692 15:45:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:32.692 15:45:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:32.692 15:45:15 bdev_raid 
-- common/autotest_common.sh@10 -- # set +x 00:18:32.692 ************************************ 00:18:32.692 START TEST raid5f_rebuild_test_sb 00:18:32.692 ************************************ 00:18:32.692 15:45:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:18:32.692 15:45:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:32.692 15:45:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:18:32.692 15:45:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:32.692 15:45:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:32.692 15:45:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:32.692 15:45:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:32.692 15:45:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:32.692 15:45:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:32.692 15:45:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:32.693 15:45:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:32.693 15:45:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:32.693 15:45:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:32.693 15:45:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:32.693 15:45:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:32.693 15:45:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:32.693 15:45:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:18:32.693 15:45:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:32.693 15:45:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:32.693 15:45:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:32.693 15:45:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:32.693 15:45:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:32.693 15:45:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:32.693 15:45:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:32.693 15:45:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:32.693 15:45:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:32.693 15:45:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:32.693 15:45:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:32.693 15:45:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:32.693 15:45:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:32.693 15:45:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82038 00:18:32.693 15:45:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82038 00:18:32.693 15:45:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:32.693 15:45:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82038 ']' 00:18:32.693 15:45:15 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.693 15:45:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:32.693 15:45:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.693 15:45:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:32.693 15:45:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.951 [2024-12-06 15:45:15.995329] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:18:32.951 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:32.951 Zero copy mechanism will not be used. 00:18:32.951 [2024-12-06 15:45:15.995638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82038 ] 00:18:32.951 [2024-12-06 15:45:16.182806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.209 [2024-12-06 15:45:16.317431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.467 [2024-12-06 15:45:16.557377] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:33.467 [2024-12-06 15:45:16.557691] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:33.725 15:45:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:33.725 15:45:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:18:33.725 15:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in 
"${base_bdevs[@]}" 00:18:33.725 15:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:33.725 15:45:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.725 15:45:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.725 BaseBdev1_malloc 00:18:33.725 15:45:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.725 15:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:33.725 15:45:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.725 15:45:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.725 [2024-12-06 15:45:16.886581] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:33.725 [2024-12-06 15:45:16.886787] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.725 [2024-12-06 15:45:16.886852] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:33.725 [2024-12-06 15:45:16.886999] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.725 [2024-12-06 15:45:16.889860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.725 [2024-12-06 15:45:16.890025] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:33.725 BaseBdev1 00:18:33.725 15:45:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.725 15:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:33.725 15:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:33.725 15:45:16 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.725 15:45:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.725 BaseBdev2_malloc 00:18:33.725 15:45:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.725 15:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:33.725 15:45:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.725 15:45:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.725 [2024-12-06 15:45:16.951252] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:33.725 [2024-12-06 15:45:16.951323] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.725 [2024-12-06 15:45:16.951352] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:33.725 [2024-12-06 15:45:16.951367] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.725 [2024-12-06 15:45:16.954089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.725 [2024-12-06 15:45:16.954134] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:33.725 BaseBdev2 00:18:33.725 15:45:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.725 15:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:33.725 15:45:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:33.725 15:45:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.725 15:45:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:18:33.984 BaseBdev3_malloc 00:18:33.984 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.984 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:33.984 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.984 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.984 [2024-12-06 15:45:17.029389] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:33.984 [2024-12-06 15:45:17.029580] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.984 [2024-12-06 15:45:17.029645] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:33.984 [2024-12-06 15:45:17.029726] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.984 [2024-12-06 15:45:17.032420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.984 [2024-12-06 15:45:17.032570] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:33.984 BaseBdev3 00:18:33.984 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.984 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:33.984 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.984 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.984 spare_malloc 00:18:33.984 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.984 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:18:33.984 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.984 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.984 spare_delay 00:18:33.984 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.984 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:33.984 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.984 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.984 [2024-12-06 15:45:17.100177] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:33.984 [2024-12-06 15:45:17.100347] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.984 [2024-12-06 15:45:17.100376] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:18:33.984 [2024-12-06 15:45:17.100391] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.984 [2024-12-06 15:45:17.103096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.984 [2024-12-06 15:45:17.103145] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:33.984 spare 00:18:33.984 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.984 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:18:33.984 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.984 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.984 [2024-12-06 15:45:17.112244] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:33.984 [2024-12-06 15:45:17.114621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:33.984 [2024-12-06 15:45:17.114689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:33.984 [2024-12-06 15:45:17.114882] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:33.984 [2024-12-06 15:45:17.114895] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:33.984 [2024-12-06 15:45:17.115176] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:33.984 [2024-12-06 15:45:17.121079] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:33.984 [2024-12-06 15:45:17.121108] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:33.985 [2024-12-06 15:45:17.121298] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:33.985 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.985 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:33.985 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:33.985 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:33.985 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:33.985 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:33.985 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:33.985 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 
-- # local raid_bdev_info 00:18:33.985 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.985 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.985 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.985 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.985 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.985 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.985 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.985 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.985 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.985 "name": "raid_bdev1", 00:18:33.985 "uuid": "f5e32b00-7073-47b8-842e-368e5cacd5cc", 00:18:33.985 "strip_size_kb": 64, 00:18:33.985 "state": "online", 00:18:33.985 "raid_level": "raid5f", 00:18:33.985 "superblock": true, 00:18:33.985 "num_base_bdevs": 3, 00:18:33.985 "num_base_bdevs_discovered": 3, 00:18:33.985 "num_base_bdevs_operational": 3, 00:18:33.985 "base_bdevs_list": [ 00:18:33.985 { 00:18:33.985 "name": "BaseBdev1", 00:18:33.985 "uuid": "216e1314-d703-5ba4-a66b-d61414f7edc4", 00:18:33.985 "is_configured": true, 00:18:33.985 "data_offset": 2048, 00:18:33.985 "data_size": 63488 00:18:33.985 }, 00:18:33.985 { 00:18:33.985 "name": "BaseBdev2", 00:18:33.985 "uuid": "12f8b234-4945-5c72-b168-e85b81c7271e", 00:18:33.985 "is_configured": true, 00:18:33.985 "data_offset": 2048, 00:18:33.985 "data_size": 63488 00:18:33.985 }, 00:18:33.985 { 00:18:33.985 "name": "BaseBdev3", 00:18:33.985 "uuid": "ff887983-0fdb-52d8-8e56-14774cd10aa8", 00:18:33.985 "is_configured": true, 
00:18:33.985 "data_offset": 2048, 00:18:33.985 "data_size": 63488 00:18:33.985 } 00:18:33.985 ] 00:18:33.985 }' 00:18:33.985 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.985 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.552 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:34.552 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.552 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.552 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:34.552 [2024-12-06 15:45:17.580051] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:34.552 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.552 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:18:34.552 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.552 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:34.552 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.552 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.552 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.552 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:18:34.552 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:34.552 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:34.552 15:45:17 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:34.552 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:34.552 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:34.552 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:34.552 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:34.552 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:34.552 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:34.552 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:34.552 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:34.552 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:34.552 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:34.811 [2024-12-06 15:45:17.855552] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:34.811 /dev/nbd0 00:18:34.811 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:34.811 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:34.811 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:34.811 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:34.811 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:34.811 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 
-- # (( i <= 20 )) 00:18:34.811 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:34.811 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:34.811 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:34.811 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:34.811 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:34.811 1+0 records in 00:18:34.811 1+0 records out 00:18:34.811 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393926 s, 10.4 MB/s 00:18:34.811 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:34.811 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:34.811 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:34.811 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:34.811 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:34.811 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:34.811 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:34.811 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:34.811 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:18:34.811 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:18:34.811 15:45:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom 
of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:18:35.069 496+0 records in 00:18:35.069 496+0 records out 00:18:35.069 65011712 bytes (65 MB, 62 MiB) copied, 0.397475 s, 164 MB/s 00:18:35.069 15:45:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:35.069 15:45:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:35.069 15:45:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:35.069 15:45:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:35.069 15:45:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:35.069 15:45:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:35.069 15:45:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:35.327 15:45:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:35.327 [2024-12-06 15:45:18.560167] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:35.327 15:45:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:35.327 15:45:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:35.327 15:45:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:35.327 15:45:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:35.327 15:45:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:35.327 15:45:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:35.327 15:45:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:35.327 15:45:18 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:35.327 15:45:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.327 15:45:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.327 [2024-12-06 15:45:18.575912] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:35.327 15:45:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.327 15:45:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:35.327 15:45:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:35.327 15:45:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:35.327 15:45:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:35.327 15:45:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:35.327 15:45:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:35.327 15:45:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.327 15:45:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.327 15:45:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.327 15:45:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.327 15:45:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.327 15:45:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.327 15:45:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.327 15:45:18 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.327 15:45:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.585 15:45:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.585 "name": "raid_bdev1", 00:18:35.585 "uuid": "f5e32b00-7073-47b8-842e-368e5cacd5cc", 00:18:35.585 "strip_size_kb": 64, 00:18:35.585 "state": "online", 00:18:35.585 "raid_level": "raid5f", 00:18:35.585 "superblock": true, 00:18:35.585 "num_base_bdevs": 3, 00:18:35.585 "num_base_bdevs_discovered": 2, 00:18:35.585 "num_base_bdevs_operational": 2, 00:18:35.585 "base_bdevs_list": [ 00:18:35.585 { 00:18:35.585 "name": null, 00:18:35.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.585 "is_configured": false, 00:18:35.585 "data_offset": 0, 00:18:35.585 "data_size": 63488 00:18:35.585 }, 00:18:35.585 { 00:18:35.585 "name": "BaseBdev2", 00:18:35.585 "uuid": "12f8b234-4945-5c72-b168-e85b81c7271e", 00:18:35.585 "is_configured": true, 00:18:35.585 "data_offset": 2048, 00:18:35.585 "data_size": 63488 00:18:35.585 }, 00:18:35.585 { 00:18:35.585 "name": "BaseBdev3", 00:18:35.585 "uuid": "ff887983-0fdb-52d8-8e56-14774cd10aa8", 00:18:35.585 "is_configured": true, 00:18:35.585 "data_offset": 2048, 00:18:35.585 "data_size": 63488 00:18:35.585 } 00:18:35.585 ] 00:18:35.585 }' 00:18:35.585 15:45:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.586 15:45:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.844 15:45:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:35.844 15:45:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.844 15:45:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.844 [2024-12-06 15:45:19.011336] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:35.844 [2024-12-06 15:45:19.031528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:18:35.844 15:45:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.844 15:45:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:35.844 [2024-12-06 15:45:19.040227] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:36.779 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:36.779 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:36.779 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:36.779 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:36.779 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:36.779 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.779 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.779 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.779 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.779 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.038 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:37.038 "name": "raid_bdev1", 00:18:37.038 "uuid": "f5e32b00-7073-47b8-842e-368e5cacd5cc", 00:18:37.038 "strip_size_kb": 64, 00:18:37.038 "state": "online", 00:18:37.038 "raid_level": "raid5f", 00:18:37.038 
"superblock": true, 00:18:37.038 "num_base_bdevs": 3, 00:18:37.038 "num_base_bdevs_discovered": 3, 00:18:37.038 "num_base_bdevs_operational": 3, 00:18:37.038 "process": { 00:18:37.038 "type": "rebuild", 00:18:37.038 "target": "spare", 00:18:37.038 "progress": { 00:18:37.038 "blocks": 20480, 00:18:37.038 "percent": 16 00:18:37.038 } 00:18:37.038 }, 00:18:37.038 "base_bdevs_list": [ 00:18:37.038 { 00:18:37.038 "name": "spare", 00:18:37.038 "uuid": "faac8d8a-7cd5-50b2-99a7-93bcbde0940c", 00:18:37.038 "is_configured": true, 00:18:37.038 "data_offset": 2048, 00:18:37.038 "data_size": 63488 00:18:37.038 }, 00:18:37.038 { 00:18:37.038 "name": "BaseBdev2", 00:18:37.038 "uuid": "12f8b234-4945-5c72-b168-e85b81c7271e", 00:18:37.038 "is_configured": true, 00:18:37.038 "data_offset": 2048, 00:18:37.038 "data_size": 63488 00:18:37.038 }, 00:18:37.038 { 00:18:37.038 "name": "BaseBdev3", 00:18:37.038 "uuid": "ff887983-0fdb-52d8-8e56-14774cd10aa8", 00:18:37.038 "is_configured": true, 00:18:37.038 "data_offset": 2048, 00:18:37.038 "data_size": 63488 00:18:37.038 } 00:18:37.038 ] 00:18:37.038 }' 00:18:37.038 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:37.038 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:37.038 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:37.038 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:37.038 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:37.038 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.038 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.038 [2024-12-06 15:45:20.183515] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:18:37.038 [2024-12-06 15:45:20.251321] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:37.038 [2024-12-06 15:45:20.251419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.038 [2024-12-06 15:45:20.251444] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:37.038 [2024-12-06 15:45:20.251455] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:37.038 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.038 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:37.038 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:37.038 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:37.038 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:37.038 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:37.038 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:37.038 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.038 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.038 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.038 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.038 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.038 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:18:37.038 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.038 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.038 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.296 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.296 "name": "raid_bdev1", 00:18:37.296 "uuid": "f5e32b00-7073-47b8-842e-368e5cacd5cc", 00:18:37.296 "strip_size_kb": 64, 00:18:37.296 "state": "online", 00:18:37.296 "raid_level": "raid5f", 00:18:37.296 "superblock": true, 00:18:37.296 "num_base_bdevs": 3, 00:18:37.296 "num_base_bdevs_discovered": 2, 00:18:37.296 "num_base_bdevs_operational": 2, 00:18:37.296 "base_bdevs_list": [ 00:18:37.296 { 00:18:37.296 "name": null, 00:18:37.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.296 "is_configured": false, 00:18:37.296 "data_offset": 0, 00:18:37.296 "data_size": 63488 00:18:37.296 }, 00:18:37.296 { 00:18:37.296 "name": "BaseBdev2", 00:18:37.296 "uuid": "12f8b234-4945-5c72-b168-e85b81c7271e", 00:18:37.296 "is_configured": true, 00:18:37.296 "data_offset": 2048, 00:18:37.296 "data_size": 63488 00:18:37.296 }, 00:18:37.296 { 00:18:37.296 "name": "BaseBdev3", 00:18:37.296 "uuid": "ff887983-0fdb-52d8-8e56-14774cd10aa8", 00:18:37.296 "is_configured": true, 00:18:37.296 "data_offset": 2048, 00:18:37.296 "data_size": 63488 00:18:37.296 } 00:18:37.296 ] 00:18:37.296 }' 00:18:37.296 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.296 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.555 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:37.555 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:37.555 15:45:20 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:37.555 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:37.555 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:37.555 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.555 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.555 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.555 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.555 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.555 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:37.555 "name": "raid_bdev1", 00:18:37.555 "uuid": "f5e32b00-7073-47b8-842e-368e5cacd5cc", 00:18:37.555 "strip_size_kb": 64, 00:18:37.555 "state": "online", 00:18:37.555 "raid_level": "raid5f", 00:18:37.555 "superblock": true, 00:18:37.555 "num_base_bdevs": 3, 00:18:37.555 "num_base_bdevs_discovered": 2, 00:18:37.555 "num_base_bdevs_operational": 2, 00:18:37.555 "base_bdevs_list": [ 00:18:37.555 { 00:18:37.555 "name": null, 00:18:37.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.555 "is_configured": false, 00:18:37.555 "data_offset": 0, 00:18:37.555 "data_size": 63488 00:18:37.555 }, 00:18:37.555 { 00:18:37.555 "name": "BaseBdev2", 00:18:37.555 "uuid": "12f8b234-4945-5c72-b168-e85b81c7271e", 00:18:37.555 "is_configured": true, 00:18:37.555 "data_offset": 2048, 00:18:37.555 "data_size": 63488 00:18:37.555 }, 00:18:37.555 { 00:18:37.555 "name": "BaseBdev3", 00:18:37.555 "uuid": "ff887983-0fdb-52d8-8e56-14774cd10aa8", 00:18:37.555 "is_configured": true, 00:18:37.555 "data_offset": 2048, 00:18:37.555 
"data_size": 63488 00:18:37.555 } 00:18:37.555 ] 00:18:37.555 }' 00:18:37.555 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:37.555 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:37.555 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:37.555 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:37.555 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:37.555 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.555 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.555 [2024-12-06 15:45:20.811528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:37.555 [2024-12-06 15:45:20.829525] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:18:37.555 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.555 15:45:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:37.555 [2024-12-06 15:45:20.837822] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:38.935 15:45:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:38.935 15:45:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:38.935 15:45:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:38.935 15:45:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:38.935 15:45:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:18:38.935 15:45:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.935 15:45:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.935 15:45:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.935 15:45:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.935 15:45:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.935 15:45:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:38.935 "name": "raid_bdev1", 00:18:38.935 "uuid": "f5e32b00-7073-47b8-842e-368e5cacd5cc", 00:18:38.935 "strip_size_kb": 64, 00:18:38.935 "state": "online", 00:18:38.935 "raid_level": "raid5f", 00:18:38.936 "superblock": true, 00:18:38.936 "num_base_bdevs": 3, 00:18:38.936 "num_base_bdevs_discovered": 3, 00:18:38.936 "num_base_bdevs_operational": 3, 00:18:38.936 "process": { 00:18:38.936 "type": "rebuild", 00:18:38.936 "target": "spare", 00:18:38.936 "progress": { 00:18:38.936 "blocks": 20480, 00:18:38.936 "percent": 16 00:18:38.936 } 00:18:38.936 }, 00:18:38.936 "base_bdevs_list": [ 00:18:38.936 { 00:18:38.936 "name": "spare", 00:18:38.936 "uuid": "faac8d8a-7cd5-50b2-99a7-93bcbde0940c", 00:18:38.936 "is_configured": true, 00:18:38.936 "data_offset": 2048, 00:18:38.936 "data_size": 63488 00:18:38.936 }, 00:18:38.936 { 00:18:38.936 "name": "BaseBdev2", 00:18:38.936 "uuid": "12f8b234-4945-5c72-b168-e85b81c7271e", 00:18:38.936 "is_configured": true, 00:18:38.936 "data_offset": 2048, 00:18:38.936 "data_size": 63488 00:18:38.936 }, 00:18:38.936 { 00:18:38.936 "name": "BaseBdev3", 00:18:38.936 "uuid": "ff887983-0fdb-52d8-8e56-14774cd10aa8", 00:18:38.936 "is_configured": true, 00:18:38.936 "data_offset": 2048, 00:18:38.936 "data_size": 63488 00:18:38.936 } 00:18:38.936 ] 00:18:38.936 }' 
00:18:38.936 15:45:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:38.936 15:45:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:38.936 15:45:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:38.936 15:45:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:38.936 15:45:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:38.936 15:45:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:38.936 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:38.936 15:45:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:18:38.936 15:45:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:38.936 15:45:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=569 00:18:38.936 15:45:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:38.936 15:45:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:38.936 15:45:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:38.936 15:45:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:38.936 15:45:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:38.936 15:45:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:38.936 15:45:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.936 15:45:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:38.936 15:45:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.936 15:45:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.936 15:45:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.936 15:45:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:38.936 "name": "raid_bdev1", 00:18:38.936 "uuid": "f5e32b00-7073-47b8-842e-368e5cacd5cc", 00:18:38.936 "strip_size_kb": 64, 00:18:38.936 "state": "online", 00:18:38.936 "raid_level": "raid5f", 00:18:38.936 "superblock": true, 00:18:38.936 "num_base_bdevs": 3, 00:18:38.936 "num_base_bdevs_discovered": 3, 00:18:38.936 "num_base_bdevs_operational": 3, 00:18:38.936 "process": { 00:18:38.936 "type": "rebuild", 00:18:38.936 "target": "spare", 00:18:38.936 "progress": { 00:18:38.936 "blocks": 22528, 00:18:38.936 "percent": 17 00:18:38.936 } 00:18:38.936 }, 00:18:38.936 "base_bdevs_list": [ 00:18:38.936 { 00:18:38.936 "name": "spare", 00:18:38.936 "uuid": "faac8d8a-7cd5-50b2-99a7-93bcbde0940c", 00:18:38.936 "is_configured": true, 00:18:38.936 "data_offset": 2048, 00:18:38.936 "data_size": 63488 00:18:38.936 }, 00:18:38.936 { 00:18:38.936 "name": "BaseBdev2", 00:18:38.936 "uuid": "12f8b234-4945-5c72-b168-e85b81c7271e", 00:18:38.936 "is_configured": true, 00:18:38.936 "data_offset": 2048, 00:18:38.936 "data_size": 63488 00:18:38.936 }, 00:18:38.936 { 00:18:38.936 "name": "BaseBdev3", 00:18:38.936 "uuid": "ff887983-0fdb-52d8-8e56-14774cd10aa8", 00:18:38.936 "is_configured": true, 00:18:38.936 "data_offset": 2048, 00:18:38.936 "data_size": 63488 00:18:38.936 } 00:18:38.936 ] 00:18:38.936 }' 00:18:38.936 15:45:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:38.936 15:45:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:18:38.936 15:45:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:38.936 15:45:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:38.936 15:45:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:39.875 15:45:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:39.875 15:45:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:39.875 15:45:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:39.875 15:45:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:39.875 15:45:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:39.875 15:45:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:39.875 15:45:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.875 15:45:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.875 15:45:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.875 15:45:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.875 15:45:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.135 15:45:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:40.135 "name": "raid_bdev1", 00:18:40.136 "uuid": "f5e32b00-7073-47b8-842e-368e5cacd5cc", 00:18:40.136 "strip_size_kb": 64, 00:18:40.136 "state": "online", 00:18:40.136 "raid_level": "raid5f", 00:18:40.136 "superblock": true, 00:18:40.136 "num_base_bdevs": 3, 00:18:40.136 "num_base_bdevs_discovered": 3, 00:18:40.136 
"num_base_bdevs_operational": 3, 00:18:40.136 "process": { 00:18:40.136 "type": "rebuild", 00:18:40.136 "target": "spare", 00:18:40.136 "progress": { 00:18:40.136 "blocks": 45056, 00:18:40.136 "percent": 35 00:18:40.136 } 00:18:40.136 }, 00:18:40.136 "base_bdevs_list": [ 00:18:40.136 { 00:18:40.136 "name": "spare", 00:18:40.136 "uuid": "faac8d8a-7cd5-50b2-99a7-93bcbde0940c", 00:18:40.136 "is_configured": true, 00:18:40.136 "data_offset": 2048, 00:18:40.136 "data_size": 63488 00:18:40.136 }, 00:18:40.136 { 00:18:40.136 "name": "BaseBdev2", 00:18:40.136 "uuid": "12f8b234-4945-5c72-b168-e85b81c7271e", 00:18:40.136 "is_configured": true, 00:18:40.136 "data_offset": 2048, 00:18:40.136 "data_size": 63488 00:18:40.136 }, 00:18:40.136 { 00:18:40.136 "name": "BaseBdev3", 00:18:40.136 "uuid": "ff887983-0fdb-52d8-8e56-14774cd10aa8", 00:18:40.136 "is_configured": true, 00:18:40.136 "data_offset": 2048, 00:18:40.136 "data_size": 63488 00:18:40.136 } 00:18:40.136 ] 00:18:40.136 }' 00:18:40.136 15:45:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:40.136 15:45:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:40.136 15:45:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:40.136 15:45:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:40.136 15:45:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:41.073 15:45:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:41.073 15:45:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:41.073 15:45:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:41.073 15:45:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:18:41.073 15:45:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:41.073 15:45:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:41.073 15:45:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.073 15:45:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.073 15:45:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.073 15:45:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.073 15:45:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.073 15:45:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:41.073 "name": "raid_bdev1", 00:18:41.074 "uuid": "f5e32b00-7073-47b8-842e-368e5cacd5cc", 00:18:41.074 "strip_size_kb": 64, 00:18:41.074 "state": "online", 00:18:41.074 "raid_level": "raid5f", 00:18:41.074 "superblock": true, 00:18:41.074 "num_base_bdevs": 3, 00:18:41.074 "num_base_bdevs_discovered": 3, 00:18:41.074 "num_base_bdevs_operational": 3, 00:18:41.074 "process": { 00:18:41.074 "type": "rebuild", 00:18:41.074 "target": "spare", 00:18:41.074 "progress": { 00:18:41.074 "blocks": 69632, 00:18:41.074 "percent": 54 00:18:41.074 } 00:18:41.074 }, 00:18:41.074 "base_bdevs_list": [ 00:18:41.074 { 00:18:41.074 "name": "spare", 00:18:41.074 "uuid": "faac8d8a-7cd5-50b2-99a7-93bcbde0940c", 00:18:41.074 "is_configured": true, 00:18:41.074 "data_offset": 2048, 00:18:41.074 "data_size": 63488 00:18:41.074 }, 00:18:41.074 { 00:18:41.074 "name": "BaseBdev2", 00:18:41.074 "uuid": "12f8b234-4945-5c72-b168-e85b81c7271e", 00:18:41.074 "is_configured": true, 00:18:41.074 "data_offset": 2048, 00:18:41.074 "data_size": 63488 00:18:41.074 }, 00:18:41.074 { 00:18:41.074 "name": "BaseBdev3", 
00:18:41.074 "uuid": "ff887983-0fdb-52d8-8e56-14774cd10aa8", 00:18:41.074 "is_configured": true, 00:18:41.074 "data_offset": 2048, 00:18:41.074 "data_size": 63488 00:18:41.074 } 00:18:41.074 ] 00:18:41.074 }' 00:18:41.074 15:45:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:41.074 15:45:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:41.333 15:45:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.333 15:45:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:41.333 15:45:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:42.280 15:45:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:42.280 15:45:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:42.280 15:45:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.280 15:45:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:42.280 15:45:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:42.280 15:45:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.280 15:45:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.280 15:45:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.280 15:45:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.280 15:45:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.280 15:45:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:18:42.280 15:45:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:42.281 "name": "raid_bdev1", 00:18:42.281 "uuid": "f5e32b00-7073-47b8-842e-368e5cacd5cc", 00:18:42.281 "strip_size_kb": 64, 00:18:42.281 "state": "online", 00:18:42.281 "raid_level": "raid5f", 00:18:42.281 "superblock": true, 00:18:42.281 "num_base_bdevs": 3, 00:18:42.281 "num_base_bdevs_discovered": 3, 00:18:42.281 "num_base_bdevs_operational": 3, 00:18:42.281 "process": { 00:18:42.281 "type": "rebuild", 00:18:42.281 "target": "spare", 00:18:42.281 "progress": { 00:18:42.281 "blocks": 92160, 00:18:42.281 "percent": 72 00:18:42.281 } 00:18:42.281 }, 00:18:42.281 "base_bdevs_list": [ 00:18:42.281 { 00:18:42.281 "name": "spare", 00:18:42.281 "uuid": "faac8d8a-7cd5-50b2-99a7-93bcbde0940c", 00:18:42.281 "is_configured": true, 00:18:42.281 "data_offset": 2048, 00:18:42.281 "data_size": 63488 00:18:42.281 }, 00:18:42.281 { 00:18:42.281 "name": "BaseBdev2", 00:18:42.281 "uuid": "12f8b234-4945-5c72-b168-e85b81c7271e", 00:18:42.281 "is_configured": true, 00:18:42.281 "data_offset": 2048, 00:18:42.281 "data_size": 63488 00:18:42.281 }, 00:18:42.281 { 00:18:42.281 "name": "BaseBdev3", 00:18:42.281 "uuid": "ff887983-0fdb-52d8-8e56-14774cd10aa8", 00:18:42.281 "is_configured": true, 00:18:42.281 "data_offset": 2048, 00:18:42.281 "data_size": 63488 00:18:42.281 } 00:18:42.281 ] 00:18:42.281 }' 00:18:42.281 15:45:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:42.281 15:45:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:42.281 15:45:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:42.281 15:45:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:42.281 15:45:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:43.280 15:45:26 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:43.280 15:45:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:43.280 15:45:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.280 15:45:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:43.280 15:45:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:43.280 15:45:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.280 15:45:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.280 15:45:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.280 15:45:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.280 15:45:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.539 15:45:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.539 15:45:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:43.539 "name": "raid_bdev1", 00:18:43.539 "uuid": "f5e32b00-7073-47b8-842e-368e5cacd5cc", 00:18:43.539 "strip_size_kb": 64, 00:18:43.539 "state": "online", 00:18:43.539 "raid_level": "raid5f", 00:18:43.539 "superblock": true, 00:18:43.539 "num_base_bdevs": 3, 00:18:43.539 "num_base_bdevs_discovered": 3, 00:18:43.539 "num_base_bdevs_operational": 3, 00:18:43.539 "process": { 00:18:43.539 "type": "rebuild", 00:18:43.539 "target": "spare", 00:18:43.539 "progress": { 00:18:43.539 "blocks": 114688, 00:18:43.539 "percent": 90 00:18:43.539 } 00:18:43.539 }, 00:18:43.539 "base_bdevs_list": [ 00:18:43.539 { 00:18:43.539 "name": "spare", 00:18:43.539 "uuid": 
"faac8d8a-7cd5-50b2-99a7-93bcbde0940c", 00:18:43.539 "is_configured": true, 00:18:43.539 "data_offset": 2048, 00:18:43.539 "data_size": 63488 00:18:43.539 }, 00:18:43.539 { 00:18:43.539 "name": "BaseBdev2", 00:18:43.539 "uuid": "12f8b234-4945-5c72-b168-e85b81c7271e", 00:18:43.539 "is_configured": true, 00:18:43.539 "data_offset": 2048, 00:18:43.539 "data_size": 63488 00:18:43.539 }, 00:18:43.539 { 00:18:43.539 "name": "BaseBdev3", 00:18:43.539 "uuid": "ff887983-0fdb-52d8-8e56-14774cd10aa8", 00:18:43.539 "is_configured": true, 00:18:43.539 "data_offset": 2048, 00:18:43.539 "data_size": 63488 00:18:43.539 } 00:18:43.539 ] 00:18:43.539 }' 00:18:43.539 15:45:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.539 15:45:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:43.539 15:45:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.539 15:45:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:43.539 15:45:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:44.107 [2024-12-06 15:45:27.092456] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:44.107 [2024-12-06 15:45:27.092579] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:44.107 [2024-12-06 15:45:27.092727] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:44.674 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:44.674 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:44.674 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:44.674 15:45:27 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:44.674 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:44.674 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:44.674 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.674 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.674 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.674 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.674 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.674 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:44.674 "name": "raid_bdev1", 00:18:44.674 "uuid": "f5e32b00-7073-47b8-842e-368e5cacd5cc", 00:18:44.674 "strip_size_kb": 64, 00:18:44.674 "state": "online", 00:18:44.674 "raid_level": "raid5f", 00:18:44.674 "superblock": true, 00:18:44.674 "num_base_bdevs": 3, 00:18:44.674 "num_base_bdevs_discovered": 3, 00:18:44.674 "num_base_bdevs_operational": 3, 00:18:44.674 "base_bdevs_list": [ 00:18:44.674 { 00:18:44.674 "name": "spare", 00:18:44.674 "uuid": "faac8d8a-7cd5-50b2-99a7-93bcbde0940c", 00:18:44.674 "is_configured": true, 00:18:44.674 "data_offset": 2048, 00:18:44.674 "data_size": 63488 00:18:44.674 }, 00:18:44.674 { 00:18:44.674 "name": "BaseBdev2", 00:18:44.674 "uuid": "12f8b234-4945-5c72-b168-e85b81c7271e", 00:18:44.674 "is_configured": true, 00:18:44.674 "data_offset": 2048, 00:18:44.674 "data_size": 63488 00:18:44.674 }, 00:18:44.674 { 00:18:44.674 "name": "BaseBdev3", 00:18:44.674 "uuid": "ff887983-0fdb-52d8-8e56-14774cd10aa8", 00:18:44.674 "is_configured": true, 00:18:44.674 "data_offset": 2048, 00:18:44.674 "data_size": 63488 00:18:44.674 } 
00:18:44.674 ] 00:18:44.674 }' 00:18:44.674 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:44.674 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:44.674 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:44.674 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:44.674 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:18:44.674 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:44.674 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:44.674 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:44.674 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:44.674 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:44.674 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.674 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.675 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.675 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.675 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.675 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:44.675 "name": "raid_bdev1", 00:18:44.675 "uuid": "f5e32b00-7073-47b8-842e-368e5cacd5cc", 00:18:44.675 "strip_size_kb": 64, 00:18:44.675 "state": "online", 00:18:44.675 "raid_level": 
"raid5f", 00:18:44.675 "superblock": true, 00:18:44.675 "num_base_bdevs": 3, 00:18:44.675 "num_base_bdevs_discovered": 3, 00:18:44.675 "num_base_bdevs_operational": 3, 00:18:44.675 "base_bdevs_list": [ 00:18:44.675 { 00:18:44.675 "name": "spare", 00:18:44.675 "uuid": "faac8d8a-7cd5-50b2-99a7-93bcbde0940c", 00:18:44.675 "is_configured": true, 00:18:44.675 "data_offset": 2048, 00:18:44.675 "data_size": 63488 00:18:44.675 }, 00:18:44.675 { 00:18:44.675 "name": "BaseBdev2", 00:18:44.675 "uuid": "12f8b234-4945-5c72-b168-e85b81c7271e", 00:18:44.675 "is_configured": true, 00:18:44.675 "data_offset": 2048, 00:18:44.675 "data_size": 63488 00:18:44.675 }, 00:18:44.675 { 00:18:44.675 "name": "BaseBdev3", 00:18:44.675 "uuid": "ff887983-0fdb-52d8-8e56-14774cd10aa8", 00:18:44.675 "is_configured": true, 00:18:44.675 "data_offset": 2048, 00:18:44.675 "data_size": 63488 00:18:44.675 } 00:18:44.675 ] 00:18:44.675 }' 00:18:44.675 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:44.675 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:44.675 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:44.675 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:44.675 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:44.675 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:44.675 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:44.675 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:44.675 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:44.675 15:45:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:44.675 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.675 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.675 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.675 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.675 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.675 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.675 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.675 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.933 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.933 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.933 "name": "raid_bdev1", 00:18:44.933 "uuid": "f5e32b00-7073-47b8-842e-368e5cacd5cc", 00:18:44.933 "strip_size_kb": 64, 00:18:44.933 "state": "online", 00:18:44.933 "raid_level": "raid5f", 00:18:44.933 "superblock": true, 00:18:44.933 "num_base_bdevs": 3, 00:18:44.933 "num_base_bdevs_discovered": 3, 00:18:44.933 "num_base_bdevs_operational": 3, 00:18:44.933 "base_bdevs_list": [ 00:18:44.933 { 00:18:44.933 "name": "spare", 00:18:44.933 "uuid": "faac8d8a-7cd5-50b2-99a7-93bcbde0940c", 00:18:44.933 "is_configured": true, 00:18:44.933 "data_offset": 2048, 00:18:44.933 "data_size": 63488 00:18:44.933 }, 00:18:44.933 { 00:18:44.933 "name": "BaseBdev2", 00:18:44.933 "uuid": "12f8b234-4945-5c72-b168-e85b81c7271e", 00:18:44.934 "is_configured": true, 00:18:44.934 "data_offset": 2048, 00:18:44.934 
"data_size": 63488 00:18:44.934 }, 00:18:44.934 { 00:18:44.934 "name": "BaseBdev3", 00:18:44.934 "uuid": "ff887983-0fdb-52d8-8e56-14774cd10aa8", 00:18:44.934 "is_configured": true, 00:18:44.934 "data_offset": 2048, 00:18:44.934 "data_size": 63488 00:18:44.934 } 00:18:44.934 ] 00:18:44.934 }' 00:18:44.934 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.934 15:45:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.193 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:45.193 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.193 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.193 [2024-12-06 15:45:28.380294] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:45.193 [2024-12-06 15:45:28.380462] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:45.193 [2024-12-06 15:45:28.380659] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:45.193 [2024-12-06 15:45:28.380768] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:45.193 [2024-12-06 15:45:28.380793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:45.193 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.193 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:18:45.193 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.193 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.193 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:45.193 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.193 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:45.193 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:45.193 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:45.193 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:45.193 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:45.193 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:45.193 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:45.193 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:45.193 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:45.193 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:45.193 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:45.193 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:45.193 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:45.451 /dev/nbd0 00:18:45.452 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:45.452 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:45.452 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 
00:18:45.452 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:45.452 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:45.452 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:45.452 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:45.452 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:45.452 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:45.452 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:45.452 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:45.452 1+0 records in 00:18:45.452 1+0 records out 00:18:45.452 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345396 s, 11.9 MB/s 00:18:45.452 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:45.452 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:45.452 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:45.452 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:45.452 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:45.452 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:45.452 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:45.452 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:45.711 /dev/nbd1 00:18:45.711 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:45.711 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:45.711 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:45.711 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:45.711 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:45.711 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:45.711 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:45.711 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:45.711 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:45.711 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:45.711 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:45.711 1+0 records in 00:18:45.711 1+0 records out 00:18:45.711 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430209 s, 9.5 MB/s 00:18:45.711 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:45.711 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:45.711 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:45.711 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # 
'[' 4096 '!=' 0 ']' 00:18:45.711 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:45.711 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:45.711 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:45.711 15:45:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:45.970 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:45.970 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:45.970 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:45.970 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:45.970 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:45.970 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:45.970 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:46.229 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:46.229 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:46.229 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:46.229 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:46.229 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:46.229 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:46.229 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@41 -- # break 00:18:46.229 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:46.229 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:46.229 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:46.487 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:46.487 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:46.487 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:46.487 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:46.487 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:46.487 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:46.488 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:46.488 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:46.488 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:46.488 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:46.488 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.488 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.488 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.488 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:46.488 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:46.488 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.488 [2024-12-06 15:45:29.610702] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:46.488 [2024-12-06 15:45:29.610787] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:46.488 [2024-12-06 15:45:29.610815] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:46.488 [2024-12-06 15:45:29.610831] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:46.488 [2024-12-06 15:45:29.613975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:46.488 [2024-12-06 15:45:29.614023] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:46.488 [2024-12-06 15:45:29.614136] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:46.488 [2024-12-06 15:45:29.614197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:46.488 [2024-12-06 15:45:29.614379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:46.488 [2024-12-06 15:45:29.614537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:46.488 spare 00:18:46.488 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.488 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:46.488 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.488 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.488 [2024-12-06 15:45:29.714487] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:46.488 [2024-12-06 15:45:29.714539] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:46.488 [2024-12-06 15:45:29.714917] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:18:46.488 [2024-12-06 15:45:29.721298] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:46.488 [2024-12-06 15:45:29.721450] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:46.488 [2024-12-06 15:45:29.721867] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:46.488 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.488 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:46.488 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:46.488 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:46.488 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:46.488 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:46.488 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:46.488 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.488 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.488 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.488 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.488 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.488 15:45:29 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.488 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.488 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.488 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.747 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.747 "name": "raid_bdev1", 00:18:46.747 "uuid": "f5e32b00-7073-47b8-842e-368e5cacd5cc", 00:18:46.747 "strip_size_kb": 64, 00:18:46.747 "state": "online", 00:18:46.747 "raid_level": "raid5f", 00:18:46.747 "superblock": true, 00:18:46.747 "num_base_bdevs": 3, 00:18:46.747 "num_base_bdevs_discovered": 3, 00:18:46.747 "num_base_bdevs_operational": 3, 00:18:46.747 "base_bdevs_list": [ 00:18:46.747 { 00:18:46.747 "name": "spare", 00:18:46.747 "uuid": "faac8d8a-7cd5-50b2-99a7-93bcbde0940c", 00:18:46.747 "is_configured": true, 00:18:46.747 "data_offset": 2048, 00:18:46.747 "data_size": 63488 00:18:46.747 }, 00:18:46.747 { 00:18:46.747 "name": "BaseBdev2", 00:18:46.747 "uuid": "12f8b234-4945-5c72-b168-e85b81c7271e", 00:18:46.747 "is_configured": true, 00:18:46.747 "data_offset": 2048, 00:18:46.747 "data_size": 63488 00:18:46.747 }, 00:18:46.747 { 00:18:46.747 "name": "BaseBdev3", 00:18:46.747 "uuid": "ff887983-0fdb-52d8-8e56-14774cd10aa8", 00:18:46.747 "is_configured": true, 00:18:46.747 "data_offset": 2048, 00:18:46.747 "data_size": 63488 00:18:46.747 } 00:18:46.747 ] 00:18:46.747 }' 00:18:46.747 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.747 15:45:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.005 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:47.005 15:45:30 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:47.005 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:47.005 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:47.005 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:47.005 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.005 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.005 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.005 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.005 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.005 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:47.005 "name": "raid_bdev1", 00:18:47.005 "uuid": "f5e32b00-7073-47b8-842e-368e5cacd5cc", 00:18:47.005 "strip_size_kb": 64, 00:18:47.005 "state": "online", 00:18:47.005 "raid_level": "raid5f", 00:18:47.005 "superblock": true, 00:18:47.005 "num_base_bdevs": 3, 00:18:47.005 "num_base_bdevs_discovered": 3, 00:18:47.005 "num_base_bdevs_operational": 3, 00:18:47.005 "base_bdevs_list": [ 00:18:47.005 { 00:18:47.005 "name": "spare", 00:18:47.005 "uuid": "faac8d8a-7cd5-50b2-99a7-93bcbde0940c", 00:18:47.005 "is_configured": true, 00:18:47.005 "data_offset": 2048, 00:18:47.005 "data_size": 63488 00:18:47.005 }, 00:18:47.005 { 00:18:47.005 "name": "BaseBdev2", 00:18:47.005 "uuid": "12f8b234-4945-5c72-b168-e85b81c7271e", 00:18:47.005 "is_configured": true, 00:18:47.005 "data_offset": 2048, 00:18:47.005 "data_size": 63488 00:18:47.005 }, 00:18:47.005 { 00:18:47.005 "name": "BaseBdev3", 00:18:47.005 "uuid": 
"ff887983-0fdb-52d8-8e56-14774cd10aa8", 00:18:47.005 "is_configured": true, 00:18:47.005 "data_offset": 2048, 00:18:47.005 "data_size": 63488 00:18:47.005 } 00:18:47.005 ] 00:18:47.005 }' 00:18:47.005 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:47.005 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:47.005 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:47.005 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:47.005 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.005 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:47.005 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.264 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.264 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.264 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:47.264 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:47.264 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.264 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.264 [2024-12-06 15:45:30.344550] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:47.264 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.264 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:47.264 
15:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:47.264 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:47.264 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:47.264 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:47.264 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:47.264 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.264 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.264 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.264 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.264 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.264 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.264 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.264 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.264 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.264 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.264 "name": "raid_bdev1", 00:18:47.264 "uuid": "f5e32b00-7073-47b8-842e-368e5cacd5cc", 00:18:47.264 "strip_size_kb": 64, 00:18:47.264 "state": "online", 00:18:47.264 "raid_level": "raid5f", 00:18:47.264 "superblock": true, 00:18:47.264 "num_base_bdevs": 3, 00:18:47.264 "num_base_bdevs_discovered": 2, 00:18:47.264 "num_base_bdevs_operational": 2, 
00:18:47.264 "base_bdevs_list": [ 00:18:47.264 { 00:18:47.264 "name": null, 00:18:47.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.264 "is_configured": false, 00:18:47.264 "data_offset": 0, 00:18:47.264 "data_size": 63488 00:18:47.264 }, 00:18:47.264 { 00:18:47.264 "name": "BaseBdev2", 00:18:47.264 "uuid": "12f8b234-4945-5c72-b168-e85b81c7271e", 00:18:47.264 "is_configured": true, 00:18:47.264 "data_offset": 2048, 00:18:47.264 "data_size": 63488 00:18:47.264 }, 00:18:47.264 { 00:18:47.264 "name": "BaseBdev3", 00:18:47.264 "uuid": "ff887983-0fdb-52d8-8e56-14774cd10aa8", 00:18:47.264 "is_configured": true, 00:18:47.264 "data_offset": 2048, 00:18:47.264 "data_size": 63488 00:18:47.264 } 00:18:47.264 ] 00:18:47.264 }' 00:18:47.264 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.264 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.522 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:47.522 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.522 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.522 [2024-12-06 15:45:30.728060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:47.522 [2024-12-06 15:45:30.728448] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:47.522 [2024-12-06 15:45:30.728642] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:47.522 [2024-12-06 15:45:30.728770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:47.522 [2024-12-06 15:45:30.746317] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:18:47.522 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.522 15:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:47.522 [2024-12-06 15:45:30.755060] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:48.895 15:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:48.895 15:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:48.895 15:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:48.895 15:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:48.895 15:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:48.895 15:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.895 15:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.895 15:45:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.895 15:45:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.895 15:45:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.895 15:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:48.895 "name": "raid_bdev1", 00:18:48.895 "uuid": "f5e32b00-7073-47b8-842e-368e5cacd5cc", 00:18:48.895 "strip_size_kb": 64, 00:18:48.895 "state": "online", 00:18:48.895 
"raid_level": "raid5f", 00:18:48.895 "superblock": true, 00:18:48.895 "num_base_bdevs": 3, 00:18:48.895 "num_base_bdevs_discovered": 3, 00:18:48.895 "num_base_bdevs_operational": 3, 00:18:48.895 "process": { 00:18:48.895 "type": "rebuild", 00:18:48.895 "target": "spare", 00:18:48.895 "progress": { 00:18:48.895 "blocks": 20480, 00:18:48.895 "percent": 16 00:18:48.895 } 00:18:48.895 }, 00:18:48.895 "base_bdevs_list": [ 00:18:48.895 { 00:18:48.895 "name": "spare", 00:18:48.895 "uuid": "faac8d8a-7cd5-50b2-99a7-93bcbde0940c", 00:18:48.895 "is_configured": true, 00:18:48.895 "data_offset": 2048, 00:18:48.895 "data_size": 63488 00:18:48.895 }, 00:18:48.895 { 00:18:48.895 "name": "BaseBdev2", 00:18:48.895 "uuid": "12f8b234-4945-5c72-b168-e85b81c7271e", 00:18:48.895 "is_configured": true, 00:18:48.895 "data_offset": 2048, 00:18:48.895 "data_size": 63488 00:18:48.895 }, 00:18:48.895 { 00:18:48.895 "name": "BaseBdev3", 00:18:48.895 "uuid": "ff887983-0fdb-52d8-8e56-14774cd10aa8", 00:18:48.895 "is_configured": true, 00:18:48.895 "data_offset": 2048, 00:18:48.895 "data_size": 63488 00:18:48.895 } 00:18:48.895 ] 00:18:48.895 }' 00:18:48.895 15:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:48.895 15:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:48.895 15:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:48.895 15:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:48.895 15:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:48.895 15:45:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.895 15:45:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.895 [2024-12-06 15:45:31.899290] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:48.895 [2024-12-06 15:45:31.967391] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:48.895 [2024-12-06 15:45:31.967665] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:48.895 [2024-12-06 15:45:31.967777] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:48.895 [2024-12-06 15:45:31.967826] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:48.895 15:45:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.895 15:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:48.895 15:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:48.895 15:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:48.895 15:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:48.895 15:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:48.895 15:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:48.895 15:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:48.895 15:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:48.895 15:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:48.895 15:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:48.895 15:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.895 15:45:32 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.895 15:45:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.895 15:45:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.895 15:45:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.895 15:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:48.895 "name": "raid_bdev1", 00:18:48.895 "uuid": "f5e32b00-7073-47b8-842e-368e5cacd5cc", 00:18:48.895 "strip_size_kb": 64, 00:18:48.895 "state": "online", 00:18:48.895 "raid_level": "raid5f", 00:18:48.895 "superblock": true, 00:18:48.895 "num_base_bdevs": 3, 00:18:48.895 "num_base_bdevs_discovered": 2, 00:18:48.895 "num_base_bdevs_operational": 2, 00:18:48.895 "base_bdevs_list": [ 00:18:48.895 { 00:18:48.896 "name": null, 00:18:48.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.896 "is_configured": false, 00:18:48.896 "data_offset": 0, 00:18:48.896 "data_size": 63488 00:18:48.896 }, 00:18:48.896 { 00:18:48.896 "name": "BaseBdev2", 00:18:48.896 "uuid": "12f8b234-4945-5c72-b168-e85b81c7271e", 00:18:48.896 "is_configured": true, 00:18:48.896 "data_offset": 2048, 00:18:48.896 "data_size": 63488 00:18:48.896 }, 00:18:48.896 { 00:18:48.896 "name": "BaseBdev3", 00:18:48.896 "uuid": "ff887983-0fdb-52d8-8e56-14774cd10aa8", 00:18:48.896 "is_configured": true, 00:18:48.896 "data_offset": 2048, 00:18:48.896 "data_size": 63488 00:18:48.896 } 00:18:48.896 ] 00:18:48.896 }' 00:18:48.896 15:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:48.896 15:45:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.153 15:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:49.153 15:45:32 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.153 15:45:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.153 [2024-12-06 15:45:32.411768] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:49.153 [2024-12-06 15:45:32.411999] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:49.153 [2024-12-06 15:45:32.412037] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:18:49.154 [2024-12-06 15:45:32.412060] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:49.154 [2024-12-06 15:45:32.412723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:49.154 [2024-12-06 15:45:32.412762] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:49.154 [2024-12-06 15:45:32.412888] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:49.154 [2024-12-06 15:45:32.412915] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:49.154 [2024-12-06 15:45:32.412930] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:49.154 [2024-12-06 15:45:32.412960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:49.154 [2024-12-06 15:45:32.430558] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:18:49.154 spare 00:18:49.154 15:45:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.154 15:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:49.154 [2024-12-06 15:45:32.438805] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:50.527 15:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:50.527 15:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:50.527 15:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:50.527 15:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:50.527 15:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:50.527 15:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.527 15:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.527 15:45:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.527 15:45:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.527 15:45:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.527 15:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:50.527 "name": "raid_bdev1", 00:18:50.527 "uuid": "f5e32b00-7073-47b8-842e-368e5cacd5cc", 00:18:50.527 "strip_size_kb": 64, 00:18:50.527 "state": 
"online", 00:18:50.527 "raid_level": "raid5f", 00:18:50.527 "superblock": true, 00:18:50.527 "num_base_bdevs": 3, 00:18:50.527 "num_base_bdevs_discovered": 3, 00:18:50.527 "num_base_bdevs_operational": 3, 00:18:50.527 "process": { 00:18:50.527 "type": "rebuild", 00:18:50.527 "target": "spare", 00:18:50.527 "progress": { 00:18:50.527 "blocks": 20480, 00:18:50.527 "percent": 16 00:18:50.527 } 00:18:50.527 }, 00:18:50.527 "base_bdevs_list": [ 00:18:50.527 { 00:18:50.527 "name": "spare", 00:18:50.527 "uuid": "faac8d8a-7cd5-50b2-99a7-93bcbde0940c", 00:18:50.527 "is_configured": true, 00:18:50.527 "data_offset": 2048, 00:18:50.527 "data_size": 63488 00:18:50.527 }, 00:18:50.527 { 00:18:50.527 "name": "BaseBdev2", 00:18:50.527 "uuid": "12f8b234-4945-5c72-b168-e85b81c7271e", 00:18:50.527 "is_configured": true, 00:18:50.527 "data_offset": 2048, 00:18:50.527 "data_size": 63488 00:18:50.527 }, 00:18:50.527 { 00:18:50.527 "name": "BaseBdev3", 00:18:50.527 "uuid": "ff887983-0fdb-52d8-8e56-14774cd10aa8", 00:18:50.527 "is_configured": true, 00:18:50.527 "data_offset": 2048, 00:18:50.527 "data_size": 63488 00:18:50.527 } 00:18:50.527 ] 00:18:50.527 }' 00:18:50.527 15:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:50.527 15:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:50.527 15:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:50.527 15:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:50.527 15:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:50.527 15:45:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.527 15:45:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.527 [2024-12-06 15:45:33.594788] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:50.527 [2024-12-06 15:45:33.650974] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:50.527 [2024-12-06 15:45:33.651046] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:50.527 [2024-12-06 15:45:33.651070] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:50.527 [2024-12-06 15:45:33.651080] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:50.527 15:45:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.527 15:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:50.527 15:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:50.527 15:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:50.527 15:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:50.527 15:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:50.527 15:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:50.527 15:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.527 15:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.527 15:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.527 15:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.527 15:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.527 15:45:33 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.527 15:45:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.527 15:45:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.527 15:45:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.527 15:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.527 "name": "raid_bdev1", 00:18:50.527 "uuid": "f5e32b00-7073-47b8-842e-368e5cacd5cc", 00:18:50.527 "strip_size_kb": 64, 00:18:50.527 "state": "online", 00:18:50.527 "raid_level": "raid5f", 00:18:50.527 "superblock": true, 00:18:50.527 "num_base_bdevs": 3, 00:18:50.527 "num_base_bdevs_discovered": 2, 00:18:50.527 "num_base_bdevs_operational": 2, 00:18:50.527 "base_bdevs_list": [ 00:18:50.527 { 00:18:50.527 "name": null, 00:18:50.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.527 "is_configured": false, 00:18:50.527 "data_offset": 0, 00:18:50.527 "data_size": 63488 00:18:50.527 }, 00:18:50.527 { 00:18:50.527 "name": "BaseBdev2", 00:18:50.527 "uuid": "12f8b234-4945-5c72-b168-e85b81c7271e", 00:18:50.527 "is_configured": true, 00:18:50.527 "data_offset": 2048, 00:18:50.527 "data_size": 63488 00:18:50.527 }, 00:18:50.527 { 00:18:50.527 "name": "BaseBdev3", 00:18:50.527 "uuid": "ff887983-0fdb-52d8-8e56-14774cd10aa8", 00:18:50.527 "is_configured": true, 00:18:50.528 "data_offset": 2048, 00:18:50.528 "data_size": 63488 00:18:50.528 } 00:18:50.528 ] 00:18:50.528 }' 00:18:50.528 15:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.528 15:45:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.785 15:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:50.785 15:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:18:50.785 15:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:50.785 15:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:50.785 15:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:50.785 15:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.785 15:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.785 15:45:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.785 15:45:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.043 15:45:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.043 15:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:51.043 "name": "raid_bdev1", 00:18:51.043 "uuid": "f5e32b00-7073-47b8-842e-368e5cacd5cc", 00:18:51.044 "strip_size_kb": 64, 00:18:51.044 "state": "online", 00:18:51.044 "raid_level": "raid5f", 00:18:51.044 "superblock": true, 00:18:51.044 "num_base_bdevs": 3, 00:18:51.044 "num_base_bdevs_discovered": 2, 00:18:51.044 "num_base_bdevs_operational": 2, 00:18:51.044 "base_bdevs_list": [ 00:18:51.044 { 00:18:51.044 "name": null, 00:18:51.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.044 "is_configured": false, 00:18:51.044 "data_offset": 0, 00:18:51.044 "data_size": 63488 00:18:51.044 }, 00:18:51.044 { 00:18:51.044 "name": "BaseBdev2", 00:18:51.044 "uuid": "12f8b234-4945-5c72-b168-e85b81c7271e", 00:18:51.044 "is_configured": true, 00:18:51.044 "data_offset": 2048, 00:18:51.044 "data_size": 63488 00:18:51.044 }, 00:18:51.044 { 00:18:51.044 "name": "BaseBdev3", 00:18:51.044 "uuid": "ff887983-0fdb-52d8-8e56-14774cd10aa8", 00:18:51.044 "is_configured": true, 
00:18:51.044 "data_offset": 2048, 00:18:51.044 "data_size": 63488 00:18:51.044 } 00:18:51.044 ] 00:18:51.044 }' 00:18:51.044 15:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:51.044 15:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:51.044 15:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:51.044 15:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:51.044 15:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:51.044 15:45:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.044 15:45:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.044 15:45:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.044 15:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:51.044 15:45:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.044 15:45:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.044 [2024-12-06 15:45:34.195078] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:51.044 [2024-12-06 15:45:34.195150] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:51.044 [2024-12-06 15:45:34.195184] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:18:51.044 [2024-12-06 15:45:34.195198] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:51.044 [2024-12-06 15:45:34.195807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:51.044 [2024-12-06 
15:45:34.195876] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:51.044 [2024-12-06 15:45:34.195988] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:51.044 [2024-12-06 15:45:34.196009] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:51.044 [2024-12-06 15:45:34.196034] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:51.044 [2024-12-06 15:45:34.196049] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:51.044 BaseBdev1 00:18:51.044 15:45:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.044 15:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:51.978 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:51.978 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:51.978 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:51.978 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:51.978 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:51.978 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:51.978 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:51.978 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.978 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:51.978 15:45:35 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:51.978 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.978 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.978 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.978 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.978 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.978 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.978 "name": "raid_bdev1", 00:18:51.978 "uuid": "f5e32b00-7073-47b8-842e-368e5cacd5cc", 00:18:51.978 "strip_size_kb": 64, 00:18:51.978 "state": "online", 00:18:51.978 "raid_level": "raid5f", 00:18:51.978 "superblock": true, 00:18:51.978 "num_base_bdevs": 3, 00:18:51.978 "num_base_bdevs_discovered": 2, 00:18:51.978 "num_base_bdevs_operational": 2, 00:18:51.978 "base_bdevs_list": [ 00:18:51.978 { 00:18:51.978 "name": null, 00:18:51.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.978 "is_configured": false, 00:18:51.978 "data_offset": 0, 00:18:51.978 "data_size": 63488 00:18:51.978 }, 00:18:51.978 { 00:18:51.978 "name": "BaseBdev2", 00:18:51.978 "uuid": "12f8b234-4945-5c72-b168-e85b81c7271e", 00:18:51.978 "is_configured": true, 00:18:51.978 "data_offset": 2048, 00:18:51.978 "data_size": 63488 00:18:51.978 }, 00:18:51.978 { 00:18:51.978 "name": "BaseBdev3", 00:18:51.978 "uuid": "ff887983-0fdb-52d8-8e56-14774cd10aa8", 00:18:51.978 "is_configured": true, 00:18:51.978 "data_offset": 2048, 00:18:51.978 "data_size": 63488 00:18:51.978 } 00:18:51.978 ] 00:18:51.978 }' 00:18:51.978 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:51.978 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:52.547 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:52.547 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:52.547 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:52.547 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:52.547 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.547 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.547 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.547 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.547 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.547 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.547 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:52.547 "name": "raid_bdev1", 00:18:52.547 "uuid": "f5e32b00-7073-47b8-842e-368e5cacd5cc", 00:18:52.547 "strip_size_kb": 64, 00:18:52.547 "state": "online", 00:18:52.547 "raid_level": "raid5f", 00:18:52.547 "superblock": true, 00:18:52.547 "num_base_bdevs": 3, 00:18:52.547 "num_base_bdevs_discovered": 2, 00:18:52.547 "num_base_bdevs_operational": 2, 00:18:52.547 "base_bdevs_list": [ 00:18:52.547 { 00:18:52.547 "name": null, 00:18:52.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.547 "is_configured": false, 00:18:52.547 "data_offset": 0, 00:18:52.547 "data_size": 63488 00:18:52.547 }, 00:18:52.547 { 00:18:52.547 "name": "BaseBdev2", 00:18:52.547 "uuid": "12f8b234-4945-5c72-b168-e85b81c7271e", 
00:18:52.547 "is_configured": true, 00:18:52.547 "data_offset": 2048, 00:18:52.547 "data_size": 63488 00:18:52.548 }, 00:18:52.548 { 00:18:52.548 "name": "BaseBdev3", 00:18:52.548 "uuid": "ff887983-0fdb-52d8-8e56-14774cd10aa8", 00:18:52.548 "is_configured": true, 00:18:52.548 "data_offset": 2048, 00:18:52.548 "data_size": 63488 00:18:52.548 } 00:18:52.548 ] 00:18:52.548 }' 00:18:52.548 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.548 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:52.548 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.548 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:52.548 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:52.548 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:18:52.548 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:52.548 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:52.548 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:52.548 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:52.548 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:52.548 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:52.548 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.548 15:45:35 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.548 [2024-12-06 15:45:35.710165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:52.548 [2024-12-06 15:45:35.710389] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:52.548 [2024-12-06 15:45:35.710409] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:52.548 request: 00:18:52.548 { 00:18:52.548 "base_bdev": "BaseBdev1", 00:18:52.548 "raid_bdev": "raid_bdev1", 00:18:52.548 "method": "bdev_raid_add_base_bdev", 00:18:52.548 "req_id": 1 00:18:52.548 } 00:18:52.548 Got JSON-RPC error response 00:18:52.548 response: 00:18:52.548 { 00:18:52.548 "code": -22, 00:18:52.548 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:52.548 } 00:18:52.548 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:52.548 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:18:52.548 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:52.548 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:52.548 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:52.548 15:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:53.496 15:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:53.496 15:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:53.496 15:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:53.496 15:45:36 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:53.496 15:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:53.496 15:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:53.496 15:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:53.496 15:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:53.496 15:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:53.496 15:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:53.496 15:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.496 15:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.496 15:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.496 15:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.496 15:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.496 15:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:53.496 "name": "raid_bdev1", 00:18:53.496 "uuid": "f5e32b00-7073-47b8-842e-368e5cacd5cc", 00:18:53.496 "strip_size_kb": 64, 00:18:53.496 "state": "online", 00:18:53.496 "raid_level": "raid5f", 00:18:53.496 "superblock": true, 00:18:53.496 "num_base_bdevs": 3, 00:18:53.496 "num_base_bdevs_discovered": 2, 00:18:53.496 "num_base_bdevs_operational": 2, 00:18:53.496 "base_bdevs_list": [ 00:18:53.496 { 00:18:53.496 "name": null, 00:18:53.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.496 "is_configured": false, 00:18:53.496 "data_offset": 0, 00:18:53.496 "data_size": 63488 00:18:53.496 }, 00:18:53.496 { 00:18:53.496 
"name": "BaseBdev2", 00:18:53.496 "uuid": "12f8b234-4945-5c72-b168-e85b81c7271e", 00:18:53.496 "is_configured": true, 00:18:53.496 "data_offset": 2048, 00:18:53.496 "data_size": 63488 00:18:53.497 }, 00:18:53.497 { 00:18:53.497 "name": "BaseBdev3", 00:18:53.497 "uuid": "ff887983-0fdb-52d8-8e56-14774cd10aa8", 00:18:53.497 "is_configured": true, 00:18:53.497 "data_offset": 2048, 00:18:53.497 "data_size": 63488 00:18:53.497 } 00:18:53.497 ] 00:18:53.497 }' 00:18:53.497 15:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:53.497 15:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.063 15:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:54.063 15:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:54.063 15:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:54.063 15:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:54.063 15:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:54.063 15:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.063 15:45:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.063 15:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.063 15:45:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.063 15:45:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.063 15:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:54.063 "name": "raid_bdev1", 00:18:54.063 "uuid": "f5e32b00-7073-47b8-842e-368e5cacd5cc", 00:18:54.063 
"strip_size_kb": 64, 00:18:54.063 "state": "online", 00:18:54.063 "raid_level": "raid5f", 00:18:54.063 "superblock": true, 00:18:54.063 "num_base_bdevs": 3, 00:18:54.063 "num_base_bdevs_discovered": 2, 00:18:54.063 "num_base_bdevs_operational": 2, 00:18:54.063 "base_bdevs_list": [ 00:18:54.063 { 00:18:54.063 "name": null, 00:18:54.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.063 "is_configured": false, 00:18:54.063 "data_offset": 0, 00:18:54.063 "data_size": 63488 00:18:54.063 }, 00:18:54.063 { 00:18:54.063 "name": "BaseBdev2", 00:18:54.063 "uuid": "12f8b234-4945-5c72-b168-e85b81c7271e", 00:18:54.063 "is_configured": true, 00:18:54.063 "data_offset": 2048, 00:18:54.063 "data_size": 63488 00:18:54.063 }, 00:18:54.063 { 00:18:54.063 "name": "BaseBdev3", 00:18:54.063 "uuid": "ff887983-0fdb-52d8-8e56-14774cd10aa8", 00:18:54.063 "is_configured": true, 00:18:54.063 "data_offset": 2048, 00:18:54.063 "data_size": 63488 00:18:54.063 } 00:18:54.063 ] 00:18:54.063 }' 00:18:54.063 15:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:54.063 15:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:54.063 15:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:54.063 15:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:54.063 15:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82038 00:18:54.063 15:45:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82038 ']' 00:18:54.063 15:45:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82038 00:18:54.063 15:45:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:18:54.063 15:45:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:54.063 15:45:37 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82038 00:18:54.063 killing process with pid 82038 00:18:54.063 Received shutdown signal, test time was about 60.000000 seconds 00:18:54.063 00:18:54.063 Latency(us) 00:18:54.063 [2024-12-06T15:45:37.358Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.063 [2024-12-06T15:45:37.359Z] =================================================================================================================== 00:18:54.064 [2024-12-06T15:45:37.359Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:54.064 15:45:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:54.064 15:45:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:54.064 15:45:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82038' 00:18:54.064 15:45:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82038 00:18:54.064 [2024-12-06 15:45:37.258350] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:54.064 15:45:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82038 00:18:54.064 [2024-12-06 15:45:37.258524] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:54.064 [2024-12-06 15:45:37.258601] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:54.064 [2024-12-06 15:45:37.258619] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:54.630 [2024-12-06 15:45:37.695436] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:56.003 15:45:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:18:56.003 00:18:56.003 real 0m23.047s 00:18:56.003 user 0m28.854s 
00:18:56.003 sys 0m3.259s 00:18:56.003 15:45:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:56.003 ************************************ 00:18:56.003 END TEST raid5f_rebuild_test_sb 00:18:56.003 ************************************ 00:18:56.003 15:45:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.003 15:45:38 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:18:56.003 15:45:38 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:18:56.003 15:45:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:56.003 15:45:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:56.003 15:45:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:56.003 ************************************ 00:18:56.003 START TEST raid5f_state_function_test 00:18:56.003 ************************************ 00:18:56.003 15:45:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:18:56.003 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:18:56.003 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:18:56.003 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:18:56.003 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:56.003 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:56.003 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:56.003 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:56.003 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:18:56.003 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:56.003 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:56.004 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:56.004 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:56.004 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:56.004 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:56.004 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:56.004 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:18:56.004 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:56.004 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:56.004 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:56.004 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:56.004 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:56.004 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:56.004 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:56.004 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:56.004 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:18:56.004 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:18:56.004 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:56.004 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:18:56.004 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:18:56.004 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82789 00:18:56.004 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:56.004 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82789' 00:18:56.004 Process raid pid: 82789 00:18:56.004 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82789 00:18:56.004 15:45:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 82789 ']' 00:18:56.004 15:45:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.004 15:45:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:56.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.004 15:45:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.004 15:45:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:56.004 15:45:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.004 [2024-12-06 15:45:39.115144] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:18:56.004 [2024-12-06 15:45:39.115292] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:56.263 [2024-12-06 15:45:39.298693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.263 [2024-12-06 15:45:39.433252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.521 [2024-12-06 15:45:39.670229] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:56.521 [2024-12-06 15:45:39.670536] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:56.780 15:45:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:56.780 15:45:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:18:56.780 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:56.780 15:45:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.780 15:45:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.780 [2024-12-06 15:45:39.954846] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:56.780 [2024-12-06 15:45:39.954917] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:56.780 [2024-12-06 15:45:39.954930] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:56.780 [2024-12-06 15:45:39.954944] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:56.780 [2024-12-06 15:45:39.954952] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:18:56.780 [2024-12-06 15:45:39.954964] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:56.780 [2024-12-06 15:45:39.954972] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:56.780 [2024-12-06 15:45:39.954985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:56.780 15:45:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.780 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:56.780 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:56.780 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:56.780 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:56.780 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:56.780 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:56.780 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:56.780 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:56.780 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:56.780 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:56.780 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.780 15:45:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.780 15:45:39 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:56.780 15:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:56.780 15:45:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.780 15:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:56.780 "name": "Existed_Raid", 00:18:56.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.780 "strip_size_kb": 64, 00:18:56.780 "state": "configuring", 00:18:56.780 "raid_level": "raid5f", 00:18:56.780 "superblock": false, 00:18:56.780 "num_base_bdevs": 4, 00:18:56.780 "num_base_bdevs_discovered": 0, 00:18:56.780 "num_base_bdevs_operational": 4, 00:18:56.780 "base_bdevs_list": [ 00:18:56.780 { 00:18:56.780 "name": "BaseBdev1", 00:18:56.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.780 "is_configured": false, 00:18:56.780 "data_offset": 0, 00:18:56.780 "data_size": 0 00:18:56.780 }, 00:18:56.780 { 00:18:56.780 "name": "BaseBdev2", 00:18:56.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.780 "is_configured": false, 00:18:56.780 "data_offset": 0, 00:18:56.780 "data_size": 0 00:18:56.780 }, 00:18:56.780 { 00:18:56.780 "name": "BaseBdev3", 00:18:56.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.780 "is_configured": false, 00:18:56.780 "data_offset": 0, 00:18:56.780 "data_size": 0 00:18:56.780 }, 00:18:56.780 { 00:18:56.780 "name": "BaseBdev4", 00:18:56.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.780 "is_configured": false, 00:18:56.780 "data_offset": 0, 00:18:56.780 "data_size": 0 00:18:56.780 } 00:18:56.780 ] 00:18:56.780 }' 00:18:56.780 15:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:56.780 15:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.038 15:45:40 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:57.038 15:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.038 15:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.038 [2024-12-06 15:45:40.326296] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:57.038 [2024-12-06 15:45:40.326345] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.297 [2024-12-06 15:45:40.338281] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:57.297 [2024-12-06 15:45:40.338334] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:57.297 [2024-12-06 15:45:40.338345] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:57.297 [2024-12-06 15:45:40.338358] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:57.297 [2024-12-06 15:45:40.338366] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:57.297 [2024-12-06 15:45:40.338378] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:57.297 [2024-12-06 15:45:40.338386] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:18:57.297 [2024-12-06 15:45:40.338399] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.297 [2024-12-06 15:45:40.391936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:57.297 BaseBdev1 00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.297 
15:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.297 [ 00:18:57.297 { 00:18:57.297 "name": "BaseBdev1", 00:18:57.297 "aliases": [ 00:18:57.297 "cb5db8ff-b1c2-4937-8a5b-8c477ba7f28b" 00:18:57.297 ], 00:18:57.297 "product_name": "Malloc disk", 00:18:57.297 "block_size": 512, 00:18:57.297 "num_blocks": 65536, 00:18:57.297 "uuid": "cb5db8ff-b1c2-4937-8a5b-8c477ba7f28b", 00:18:57.297 "assigned_rate_limits": { 00:18:57.297 "rw_ios_per_sec": 0, 00:18:57.297 "rw_mbytes_per_sec": 0, 00:18:57.297 "r_mbytes_per_sec": 0, 00:18:57.297 "w_mbytes_per_sec": 0 00:18:57.297 }, 00:18:57.297 "claimed": true, 00:18:57.297 "claim_type": "exclusive_write", 00:18:57.297 "zoned": false, 00:18:57.297 "supported_io_types": { 00:18:57.297 "read": true, 00:18:57.297 "write": true, 00:18:57.297 "unmap": true, 00:18:57.297 "flush": true, 00:18:57.297 "reset": true, 00:18:57.297 "nvme_admin": false, 00:18:57.297 "nvme_io": false, 00:18:57.297 "nvme_io_md": false, 00:18:57.297 "write_zeroes": true, 00:18:57.297 "zcopy": true, 00:18:57.297 "get_zone_info": false, 00:18:57.297 "zone_management": false, 00:18:57.297 "zone_append": false, 00:18:57.297 "compare": false, 00:18:57.297 "compare_and_write": false, 00:18:57.297 "abort": true, 00:18:57.297 "seek_hole": false, 00:18:57.297 "seek_data": false, 00:18:57.297 "copy": true, 00:18:57.297 "nvme_iov_md": false 00:18:57.297 }, 00:18:57.297 "memory_domains": [ 00:18:57.297 { 00:18:57.297 "dma_device_id": "system", 00:18:57.297 "dma_device_type": 1 00:18:57.297 }, 00:18:57.297 { 00:18:57.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.297 "dma_device_type": 2 00:18:57.297 } 00:18:57.297 ], 00:18:57.297 "driver_specific": {} 00:18:57.297 } 
00:18:57.297 ] 00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.297 "name": "Existed_Raid", 00:18:57.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.297 "strip_size_kb": 64, 00:18:57.297 "state": "configuring", 00:18:57.297 "raid_level": "raid5f", 00:18:57.297 "superblock": false, 00:18:57.297 "num_base_bdevs": 4, 00:18:57.297 "num_base_bdevs_discovered": 1, 00:18:57.297 "num_base_bdevs_operational": 4, 00:18:57.297 "base_bdevs_list": [ 00:18:57.297 { 00:18:57.297 "name": "BaseBdev1", 00:18:57.297 "uuid": "cb5db8ff-b1c2-4937-8a5b-8c477ba7f28b", 00:18:57.297 "is_configured": true, 00:18:57.297 "data_offset": 0, 00:18:57.297 "data_size": 65536 00:18:57.297 }, 00:18:57.297 { 00:18:57.297 "name": "BaseBdev2", 00:18:57.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.297 "is_configured": false, 00:18:57.297 "data_offset": 0, 00:18:57.297 "data_size": 0 00:18:57.297 }, 00:18:57.297 { 00:18:57.297 "name": "BaseBdev3", 00:18:57.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.297 "is_configured": false, 00:18:57.297 "data_offset": 0, 00:18:57.297 "data_size": 0 00:18:57.297 }, 00:18:57.297 { 00:18:57.297 "name": "BaseBdev4", 00:18:57.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.297 "is_configured": false, 00:18:57.297 "data_offset": 0, 00:18:57.297 "data_size": 0 00:18:57.297 } 00:18:57.297 ] 00:18:57.297 }' 00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.297 15:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.864 15:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:57.864 15:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.864 15:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.864 
[2024-12-06 15:45:40.859337] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:57.864 [2024-12-06 15:45:40.859407] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:57.864 15:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.864 15:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:57.864 15:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.864 15:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.864 [2024-12-06 15:45:40.871388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:57.864 [2024-12-06 15:45:40.873978] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:57.864 [2024-12-06 15:45:40.874138] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:57.864 [2024-12-06 15:45:40.874160] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:57.864 [2024-12-06 15:45:40.874177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:57.864 [2024-12-06 15:45:40.874186] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:57.864 [2024-12-06 15:45:40.874198] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:57.864 15:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.864 15:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:57.864 15:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:18:57.864 15:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:57.864 15:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:57.864 15:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:57.864 15:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:57.864 15:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:57.864 15:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:57.864 15:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.864 15:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.864 15:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.864 15:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.864 15:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.865 15:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:57.865 15:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.865 15:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.865 15:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.865 15:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.865 "name": "Existed_Raid", 00:18:57.865 "uuid": "00000000-0000-0000-0000-000000000000", 
00:18:57.865 "strip_size_kb": 64, 00:18:57.865 "state": "configuring", 00:18:57.865 "raid_level": "raid5f", 00:18:57.865 "superblock": false, 00:18:57.865 "num_base_bdevs": 4, 00:18:57.865 "num_base_bdevs_discovered": 1, 00:18:57.865 "num_base_bdevs_operational": 4, 00:18:57.865 "base_bdevs_list": [ 00:18:57.865 { 00:18:57.865 "name": "BaseBdev1", 00:18:57.865 "uuid": "cb5db8ff-b1c2-4937-8a5b-8c477ba7f28b", 00:18:57.865 "is_configured": true, 00:18:57.865 "data_offset": 0, 00:18:57.865 "data_size": 65536 00:18:57.865 }, 00:18:57.865 { 00:18:57.865 "name": "BaseBdev2", 00:18:57.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.865 "is_configured": false, 00:18:57.865 "data_offset": 0, 00:18:57.865 "data_size": 0 00:18:57.865 }, 00:18:57.865 { 00:18:57.865 "name": "BaseBdev3", 00:18:57.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.865 "is_configured": false, 00:18:57.865 "data_offset": 0, 00:18:57.865 "data_size": 0 00:18:57.865 }, 00:18:57.865 { 00:18:57.865 "name": "BaseBdev4", 00:18:57.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.865 "is_configured": false, 00:18:57.865 "data_offset": 0, 00:18:57.865 "data_size": 0 00:18:57.865 } 00:18:57.865 ] 00:18:57.865 }' 00:18:57.865 15:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.865 15:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.123 15:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:58.123 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.123 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.123 [2024-12-06 15:45:41.322173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:58.123 BaseBdev2 00:18:58.123 15:45:41 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.123 15:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:58.123 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:58.123 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:58.123 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:58.123 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:58.123 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:58.123 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:58.123 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.123 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.123 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.123 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:58.123 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.123 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.123 [ 00:18:58.123 { 00:18:58.123 "name": "BaseBdev2", 00:18:58.123 "aliases": [ 00:18:58.123 "7a961173-7374-43e3-8d8f-9b17b4bb6ce1" 00:18:58.123 ], 00:18:58.123 "product_name": "Malloc disk", 00:18:58.123 "block_size": 512, 00:18:58.123 "num_blocks": 65536, 00:18:58.123 "uuid": "7a961173-7374-43e3-8d8f-9b17b4bb6ce1", 00:18:58.123 "assigned_rate_limits": { 00:18:58.123 "rw_ios_per_sec": 0, 00:18:58.123 "rw_mbytes_per_sec": 0, 00:18:58.123 
"r_mbytes_per_sec": 0, 00:18:58.123 "w_mbytes_per_sec": 0 00:18:58.123 }, 00:18:58.123 "claimed": true, 00:18:58.123 "claim_type": "exclusive_write", 00:18:58.123 "zoned": false, 00:18:58.123 "supported_io_types": { 00:18:58.123 "read": true, 00:18:58.123 "write": true, 00:18:58.123 "unmap": true, 00:18:58.123 "flush": true, 00:18:58.123 "reset": true, 00:18:58.123 "nvme_admin": false, 00:18:58.123 "nvme_io": false, 00:18:58.123 "nvme_io_md": false, 00:18:58.123 "write_zeroes": true, 00:18:58.123 "zcopy": true, 00:18:58.123 "get_zone_info": false, 00:18:58.123 "zone_management": false, 00:18:58.123 "zone_append": false, 00:18:58.123 "compare": false, 00:18:58.123 "compare_and_write": false, 00:18:58.123 "abort": true, 00:18:58.123 "seek_hole": false, 00:18:58.123 "seek_data": false, 00:18:58.123 "copy": true, 00:18:58.123 "nvme_iov_md": false 00:18:58.123 }, 00:18:58.123 "memory_domains": [ 00:18:58.123 { 00:18:58.123 "dma_device_id": "system", 00:18:58.123 "dma_device_type": 1 00:18:58.123 }, 00:18:58.123 { 00:18:58.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.123 "dma_device_type": 2 00:18:58.123 } 00:18:58.123 ], 00:18:58.123 "driver_specific": {} 00:18:58.123 } 00:18:58.123 ] 00:18:58.124 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.124 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:58.124 15:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:58.124 15:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:58.124 15:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:58.124 15:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:58.124 15:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:18:58.124 15:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:58.124 15:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:58.124 15:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:58.124 15:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.124 15:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.124 15:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.124 15:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.124 15:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.124 15:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:58.124 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.124 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.124 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.420 15:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.420 "name": "Existed_Raid", 00:18:58.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.420 "strip_size_kb": 64, 00:18:58.420 "state": "configuring", 00:18:58.420 "raid_level": "raid5f", 00:18:58.420 "superblock": false, 00:18:58.420 "num_base_bdevs": 4, 00:18:58.420 "num_base_bdevs_discovered": 2, 00:18:58.420 "num_base_bdevs_operational": 4, 00:18:58.420 "base_bdevs_list": [ 00:18:58.420 { 00:18:58.420 "name": "BaseBdev1", 00:18:58.420 "uuid": 
"cb5db8ff-b1c2-4937-8a5b-8c477ba7f28b", 00:18:58.420 "is_configured": true, 00:18:58.420 "data_offset": 0, 00:18:58.420 "data_size": 65536 00:18:58.420 }, 00:18:58.420 { 00:18:58.420 "name": "BaseBdev2", 00:18:58.420 "uuid": "7a961173-7374-43e3-8d8f-9b17b4bb6ce1", 00:18:58.420 "is_configured": true, 00:18:58.420 "data_offset": 0, 00:18:58.420 "data_size": 65536 00:18:58.420 }, 00:18:58.420 { 00:18:58.420 "name": "BaseBdev3", 00:18:58.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.420 "is_configured": false, 00:18:58.420 "data_offset": 0, 00:18:58.420 "data_size": 0 00:18:58.420 }, 00:18:58.420 { 00:18:58.420 "name": "BaseBdev4", 00:18:58.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.420 "is_configured": false, 00:18:58.420 "data_offset": 0, 00:18:58.420 "data_size": 0 00:18:58.420 } 00:18:58.420 ] 00:18:58.420 }' 00:18:58.420 15:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.420 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.679 15:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:58.679 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.679 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.679 [2024-12-06 15:45:41.839577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:58.679 BaseBdev3 00:18:58.679 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.679 15:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:58.679 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:58.679 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:18:58.679 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:58.679 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:58.679 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:58.679 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:58.679 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.679 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.679 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.679 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:58.679 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.679 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.679 [ 00:18:58.679 { 00:18:58.679 "name": "BaseBdev3", 00:18:58.679 "aliases": [ 00:18:58.679 "b5b7f230-b162-4db1-84c9-98cbd687653a" 00:18:58.679 ], 00:18:58.679 "product_name": "Malloc disk", 00:18:58.679 "block_size": 512, 00:18:58.679 "num_blocks": 65536, 00:18:58.679 "uuid": "b5b7f230-b162-4db1-84c9-98cbd687653a", 00:18:58.679 "assigned_rate_limits": { 00:18:58.679 "rw_ios_per_sec": 0, 00:18:58.679 "rw_mbytes_per_sec": 0, 00:18:58.679 "r_mbytes_per_sec": 0, 00:18:58.679 "w_mbytes_per_sec": 0 00:18:58.679 }, 00:18:58.679 "claimed": true, 00:18:58.679 "claim_type": "exclusive_write", 00:18:58.679 "zoned": false, 00:18:58.679 "supported_io_types": { 00:18:58.679 "read": true, 00:18:58.679 "write": true, 00:18:58.679 "unmap": true, 00:18:58.679 "flush": true, 00:18:58.679 "reset": true, 00:18:58.679 "nvme_admin": false, 
00:18:58.679 "nvme_io": false, 00:18:58.679 "nvme_io_md": false, 00:18:58.679 "write_zeroes": true, 00:18:58.679 "zcopy": true, 00:18:58.679 "get_zone_info": false, 00:18:58.679 "zone_management": false, 00:18:58.679 "zone_append": false, 00:18:58.679 "compare": false, 00:18:58.679 "compare_and_write": false, 00:18:58.679 "abort": true, 00:18:58.679 "seek_hole": false, 00:18:58.679 "seek_data": false, 00:18:58.679 "copy": true, 00:18:58.679 "nvme_iov_md": false 00:18:58.679 }, 00:18:58.679 "memory_domains": [ 00:18:58.679 { 00:18:58.679 "dma_device_id": "system", 00:18:58.679 "dma_device_type": 1 00:18:58.679 }, 00:18:58.679 { 00:18:58.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.679 "dma_device_type": 2 00:18:58.679 } 00:18:58.679 ], 00:18:58.679 "driver_specific": {} 00:18:58.679 } 00:18:58.679 ] 00:18:58.679 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.679 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:58.679 15:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:58.679 15:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:58.679 15:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:58.679 15:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:58.679 15:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:58.679 15:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:58.679 15:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:58.679 15:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:18:58.679 15:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.679 15:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.679 15:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.679 15:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.679 15:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.679 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.679 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.679 15:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:58.679 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.680 15:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.680 "name": "Existed_Raid", 00:18:58.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.680 "strip_size_kb": 64, 00:18:58.680 "state": "configuring", 00:18:58.680 "raid_level": "raid5f", 00:18:58.680 "superblock": false, 00:18:58.680 "num_base_bdevs": 4, 00:18:58.680 "num_base_bdevs_discovered": 3, 00:18:58.680 "num_base_bdevs_operational": 4, 00:18:58.680 "base_bdevs_list": [ 00:18:58.680 { 00:18:58.680 "name": "BaseBdev1", 00:18:58.680 "uuid": "cb5db8ff-b1c2-4937-8a5b-8c477ba7f28b", 00:18:58.680 "is_configured": true, 00:18:58.680 "data_offset": 0, 00:18:58.680 "data_size": 65536 00:18:58.680 }, 00:18:58.680 { 00:18:58.680 "name": "BaseBdev2", 00:18:58.680 "uuid": "7a961173-7374-43e3-8d8f-9b17b4bb6ce1", 00:18:58.680 "is_configured": true, 00:18:58.680 "data_offset": 0, 00:18:58.680 "data_size": 65536 00:18:58.680 }, 00:18:58.680 { 
00:18:58.680 "name": "BaseBdev3", 00:18:58.680 "uuid": "b5b7f230-b162-4db1-84c9-98cbd687653a", 00:18:58.680 "is_configured": true, 00:18:58.680 "data_offset": 0, 00:18:58.680 "data_size": 65536 00:18:58.680 }, 00:18:58.680 { 00:18:58.680 "name": "BaseBdev4", 00:18:58.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.680 "is_configured": false, 00:18:58.680 "data_offset": 0, 00:18:58.680 "data_size": 0 00:18:58.680 } 00:18:58.680 ] 00:18:58.680 }' 00:18:58.680 15:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.680 15:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.247 15:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:59.247 15:45:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.247 15:45:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.247 [2024-12-06 15:45:42.346435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:59.247 [2024-12-06 15:45:42.346554] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:59.247 [2024-12-06 15:45:42.346568] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:59.247 [2024-12-06 15:45:42.346898] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:59.247 [2024-12-06 15:45:42.354520] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:59.247 [2024-12-06 15:45:42.354551] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:59.247 [2024-12-06 15:45:42.354883] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:59.247 BaseBdev4 00:18:59.247 15:45:42 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.247 15:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:18:59.247 15:45:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:18:59.247 15:45:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:59.247 15:45:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:59.247 15:45:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:59.247 15:45:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:59.247 15:45:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:59.247 15:45:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.247 15:45:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.247 15:45:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.247 15:45:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:59.247 15:45:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.247 15:45:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.247 [ 00:18:59.247 { 00:18:59.247 "name": "BaseBdev4", 00:18:59.247 "aliases": [ 00:18:59.247 "92629bbb-124c-4be8-8ac6-d09ad066cef4" 00:18:59.247 ], 00:18:59.247 "product_name": "Malloc disk", 00:18:59.247 "block_size": 512, 00:18:59.247 "num_blocks": 65536, 00:18:59.247 "uuid": "92629bbb-124c-4be8-8ac6-d09ad066cef4", 00:18:59.247 "assigned_rate_limits": { 00:18:59.247 "rw_ios_per_sec": 0, 00:18:59.247 
"rw_mbytes_per_sec": 0, 00:18:59.247 "r_mbytes_per_sec": 0, 00:18:59.247 "w_mbytes_per_sec": 0 00:18:59.247 }, 00:18:59.247 "claimed": true, 00:18:59.247 "claim_type": "exclusive_write", 00:18:59.247 "zoned": false, 00:18:59.247 "supported_io_types": { 00:18:59.247 "read": true, 00:18:59.247 "write": true, 00:18:59.247 "unmap": true, 00:18:59.247 "flush": true, 00:18:59.247 "reset": true, 00:18:59.247 "nvme_admin": false, 00:18:59.247 "nvme_io": false, 00:18:59.247 "nvme_io_md": false, 00:18:59.247 "write_zeroes": true, 00:18:59.247 "zcopy": true, 00:18:59.247 "get_zone_info": false, 00:18:59.247 "zone_management": false, 00:18:59.247 "zone_append": false, 00:18:59.247 "compare": false, 00:18:59.247 "compare_and_write": false, 00:18:59.247 "abort": true, 00:18:59.247 "seek_hole": false, 00:18:59.247 "seek_data": false, 00:18:59.247 "copy": true, 00:18:59.247 "nvme_iov_md": false 00:18:59.247 }, 00:18:59.247 "memory_domains": [ 00:18:59.247 { 00:18:59.247 "dma_device_id": "system", 00:18:59.247 "dma_device_type": 1 00:18:59.247 }, 00:18:59.247 { 00:18:59.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.247 "dma_device_type": 2 00:18:59.247 } 00:18:59.247 ], 00:18:59.247 "driver_specific": {} 00:18:59.247 } 00:18:59.247 ] 00:18:59.247 15:45:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.247 15:45:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:59.247 15:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:59.247 15:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:59.248 15:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:18:59.248 15:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:59.248 15:45:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:59.248 15:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:59.248 15:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:59.248 15:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:59.248 15:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.248 15:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.248 15:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.248 15:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.248 15:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:59.248 15:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.248 15:45:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.248 15:45:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.248 15:45:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.248 15:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.248 "name": "Existed_Raid", 00:18:59.248 "uuid": "0a6ecccc-faee-48e8-a14b-c88e1129d05c", 00:18:59.248 "strip_size_kb": 64, 00:18:59.248 "state": "online", 00:18:59.248 "raid_level": "raid5f", 00:18:59.248 "superblock": false, 00:18:59.248 "num_base_bdevs": 4, 00:18:59.248 "num_base_bdevs_discovered": 4, 00:18:59.248 "num_base_bdevs_operational": 4, 00:18:59.248 "base_bdevs_list": [ 00:18:59.248 { 00:18:59.248 "name": 
"BaseBdev1", 00:18:59.248 "uuid": "cb5db8ff-b1c2-4937-8a5b-8c477ba7f28b", 00:18:59.248 "is_configured": true, 00:18:59.248 "data_offset": 0, 00:18:59.248 "data_size": 65536 00:18:59.248 }, 00:18:59.248 { 00:18:59.248 "name": "BaseBdev2", 00:18:59.248 "uuid": "7a961173-7374-43e3-8d8f-9b17b4bb6ce1", 00:18:59.248 "is_configured": true, 00:18:59.248 "data_offset": 0, 00:18:59.248 "data_size": 65536 00:18:59.248 }, 00:18:59.248 { 00:18:59.248 "name": "BaseBdev3", 00:18:59.248 "uuid": "b5b7f230-b162-4db1-84c9-98cbd687653a", 00:18:59.248 "is_configured": true, 00:18:59.248 "data_offset": 0, 00:18:59.248 "data_size": 65536 00:18:59.248 }, 00:18:59.248 { 00:18:59.248 "name": "BaseBdev4", 00:18:59.248 "uuid": "92629bbb-124c-4be8-8ac6-d09ad066cef4", 00:18:59.248 "is_configured": true, 00:18:59.248 "data_offset": 0, 00:18:59.248 "data_size": 65536 00:18:59.248 } 00:18:59.248 ] 00:18:59.248 }' 00:18:59.248 15:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.248 15:45:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.507 15:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:59.507 15:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:59.507 15:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:59.507 15:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:59.507 15:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:59.507 15:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:59.507 15:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:59.507 15:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:18:59.507 15:45:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.507 15:45:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.507 [2024-12-06 15:45:42.799661] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:59.766 15:45:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.766 15:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:59.766 "name": "Existed_Raid", 00:18:59.766 "aliases": [ 00:18:59.766 "0a6ecccc-faee-48e8-a14b-c88e1129d05c" 00:18:59.766 ], 00:18:59.766 "product_name": "Raid Volume", 00:18:59.766 "block_size": 512, 00:18:59.766 "num_blocks": 196608, 00:18:59.766 "uuid": "0a6ecccc-faee-48e8-a14b-c88e1129d05c", 00:18:59.766 "assigned_rate_limits": { 00:18:59.766 "rw_ios_per_sec": 0, 00:18:59.766 "rw_mbytes_per_sec": 0, 00:18:59.766 "r_mbytes_per_sec": 0, 00:18:59.766 "w_mbytes_per_sec": 0 00:18:59.766 }, 00:18:59.766 "claimed": false, 00:18:59.766 "zoned": false, 00:18:59.766 "supported_io_types": { 00:18:59.766 "read": true, 00:18:59.766 "write": true, 00:18:59.766 "unmap": false, 00:18:59.766 "flush": false, 00:18:59.766 "reset": true, 00:18:59.766 "nvme_admin": false, 00:18:59.766 "nvme_io": false, 00:18:59.766 "nvme_io_md": false, 00:18:59.766 "write_zeroes": true, 00:18:59.766 "zcopy": false, 00:18:59.766 "get_zone_info": false, 00:18:59.766 "zone_management": false, 00:18:59.766 "zone_append": false, 00:18:59.766 "compare": false, 00:18:59.766 "compare_and_write": false, 00:18:59.766 "abort": false, 00:18:59.766 "seek_hole": false, 00:18:59.766 "seek_data": false, 00:18:59.766 "copy": false, 00:18:59.766 "nvme_iov_md": false 00:18:59.766 }, 00:18:59.766 "driver_specific": { 00:18:59.766 "raid": { 00:18:59.766 "uuid": "0a6ecccc-faee-48e8-a14b-c88e1129d05c", 00:18:59.766 "strip_size_kb": 64, 
00:18:59.766 "state": "online", 00:18:59.766 "raid_level": "raid5f", 00:18:59.766 "superblock": false, 00:18:59.766 "num_base_bdevs": 4, 00:18:59.766 "num_base_bdevs_discovered": 4, 00:18:59.766 "num_base_bdevs_operational": 4, 00:18:59.766 "base_bdevs_list": [ 00:18:59.766 { 00:18:59.766 "name": "BaseBdev1", 00:18:59.766 "uuid": "cb5db8ff-b1c2-4937-8a5b-8c477ba7f28b", 00:18:59.766 "is_configured": true, 00:18:59.766 "data_offset": 0, 00:18:59.766 "data_size": 65536 00:18:59.766 }, 00:18:59.766 { 00:18:59.766 "name": "BaseBdev2", 00:18:59.766 "uuid": "7a961173-7374-43e3-8d8f-9b17b4bb6ce1", 00:18:59.766 "is_configured": true, 00:18:59.766 "data_offset": 0, 00:18:59.766 "data_size": 65536 00:18:59.766 }, 00:18:59.766 { 00:18:59.766 "name": "BaseBdev3", 00:18:59.766 "uuid": "b5b7f230-b162-4db1-84c9-98cbd687653a", 00:18:59.766 "is_configured": true, 00:18:59.766 "data_offset": 0, 00:18:59.766 "data_size": 65536 00:18:59.766 }, 00:18:59.766 { 00:18:59.766 "name": "BaseBdev4", 00:18:59.766 "uuid": "92629bbb-124c-4be8-8ac6-d09ad066cef4", 00:18:59.766 "is_configured": true, 00:18:59.766 "data_offset": 0, 00:18:59.766 "data_size": 65536 00:18:59.766 } 00:18:59.766 ] 00:18:59.766 } 00:18:59.766 } 00:18:59.766 }' 00:18:59.766 15:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:59.766 15:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:59.766 BaseBdev2 00:18:59.766 BaseBdev3 00:18:59.766 BaseBdev4' 00:18:59.766 15:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:59.766 15:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:59.766 15:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:59.766 15:45:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:59.766 15:45:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.766 15:45:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.766 15:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:59.766 15:45:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.766 15:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:59.766 15:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:59.766 15:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:59.766 15:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:59.766 15:45:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.766 15:45:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.766 15:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:59.766 15:45:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.766 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:59.766 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:59.766 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:59.766 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:59.766 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:59.766 15:45:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.766 15:45:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.766 15:45:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.766 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:59.766 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:59.766 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:00.025 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:00.025 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:00.025 15:45:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.025 15:45:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.025 15:45:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.025 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:00.025 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:00.025 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:00.025 15:45:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.025 15:45:43 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:00.025 [2024-12-06 15:45:43.098999] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:00.025 15:45:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.025 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:00.025 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:19:00.025 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:00.025 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:00.025 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:00.025 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:19:00.025 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:00.025 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:00.026 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:00.026 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:00.026 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:00.026 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.026 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.026 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.026 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.026 15:45:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.026 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.026 15:45:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.026 15:45:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.026 15:45:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.026 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.026 "name": "Existed_Raid", 00:19:00.026 "uuid": "0a6ecccc-faee-48e8-a14b-c88e1129d05c", 00:19:00.026 "strip_size_kb": 64, 00:19:00.026 "state": "online", 00:19:00.026 "raid_level": "raid5f", 00:19:00.026 "superblock": false, 00:19:00.026 "num_base_bdevs": 4, 00:19:00.026 "num_base_bdevs_discovered": 3, 00:19:00.026 "num_base_bdevs_operational": 3, 00:19:00.026 "base_bdevs_list": [ 00:19:00.026 { 00:19:00.026 "name": null, 00:19:00.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.026 "is_configured": false, 00:19:00.026 "data_offset": 0, 00:19:00.026 "data_size": 65536 00:19:00.026 }, 00:19:00.026 { 00:19:00.026 "name": "BaseBdev2", 00:19:00.026 "uuid": "7a961173-7374-43e3-8d8f-9b17b4bb6ce1", 00:19:00.026 "is_configured": true, 00:19:00.026 "data_offset": 0, 00:19:00.026 "data_size": 65536 00:19:00.026 }, 00:19:00.026 { 00:19:00.026 "name": "BaseBdev3", 00:19:00.026 "uuid": "b5b7f230-b162-4db1-84c9-98cbd687653a", 00:19:00.026 "is_configured": true, 00:19:00.026 "data_offset": 0, 00:19:00.026 "data_size": 65536 00:19:00.026 }, 00:19:00.026 { 00:19:00.026 "name": "BaseBdev4", 00:19:00.026 "uuid": "92629bbb-124c-4be8-8ac6-d09ad066cef4", 00:19:00.026 "is_configured": true, 00:19:00.026 "data_offset": 0, 00:19:00.026 "data_size": 65536 00:19:00.026 } 00:19:00.026 ] 00:19:00.026 }' 00:19:00.026 
15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.026 15:45:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.593 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:00.593 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:00.593 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:00.593 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.593 15:45:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.593 15:45:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.593 15:45:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.593 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:00.593 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:00.593 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:00.593 15:45:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.593 15:45:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.593 [2024-12-06 15:45:43.667743] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:00.593 [2024-12-06 15:45:43.668013] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:00.593 [2024-12-06 15:45:43.775490] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:00.593 15:45:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:19:00.593 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:00.593 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:00.593 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.593 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:00.593 15:45:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.593 15:45:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.593 15:45:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.593 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:00.593 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:00.593 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:19:00.593 15:45:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.593 15:45:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.593 [2024-12-06 15:45:43.827456] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:00.852 15:45:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.852 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:00.852 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:00.852 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.853 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:19:00.853 15:45:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.853 15:45:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.853 15:45:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.853 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:00.853 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:00.853 15:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:19:00.853 15:45:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.853 15:45:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.853 [2024-12-06 15:45:43.986433] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:00.853 [2024-12-06 15:45:43.986514] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:00.853 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.853 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:00.853 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:00.853 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.853 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:00.853 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.853 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.853 15:45:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.853 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:00.853 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:00.853 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:19:00.853 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:19:00.853 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:00.853 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:00.853 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.853 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.112 BaseBdev2 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.112 [ 00:19:01.112 { 00:19:01.112 "name": "BaseBdev2", 00:19:01.112 "aliases": [ 00:19:01.112 "b06b7845-e7f1-4ff1-b46a-8734ceb6cfe5" 00:19:01.112 ], 00:19:01.112 "product_name": "Malloc disk", 00:19:01.112 "block_size": 512, 00:19:01.112 "num_blocks": 65536, 00:19:01.112 "uuid": "b06b7845-e7f1-4ff1-b46a-8734ceb6cfe5", 00:19:01.112 "assigned_rate_limits": { 00:19:01.112 "rw_ios_per_sec": 0, 00:19:01.112 "rw_mbytes_per_sec": 0, 00:19:01.112 "r_mbytes_per_sec": 0, 00:19:01.112 "w_mbytes_per_sec": 0 00:19:01.112 }, 00:19:01.112 "claimed": false, 00:19:01.112 "zoned": false, 00:19:01.112 "supported_io_types": { 00:19:01.112 "read": true, 00:19:01.112 "write": true, 00:19:01.112 "unmap": true, 00:19:01.112 "flush": true, 00:19:01.112 "reset": true, 00:19:01.112 "nvme_admin": false, 00:19:01.112 "nvme_io": false, 00:19:01.112 "nvme_io_md": false, 00:19:01.112 "write_zeroes": true, 00:19:01.112 "zcopy": true, 00:19:01.112 "get_zone_info": false, 00:19:01.112 "zone_management": false, 00:19:01.112 "zone_append": false, 00:19:01.112 "compare": false, 00:19:01.112 "compare_and_write": false, 00:19:01.112 "abort": true, 00:19:01.112 "seek_hole": false, 00:19:01.112 "seek_data": false, 00:19:01.112 "copy": true, 00:19:01.112 "nvme_iov_md": false 00:19:01.112 }, 00:19:01.112 "memory_domains": [ 00:19:01.112 { 00:19:01.112 "dma_device_id": "system", 00:19:01.112 "dma_device_type": 1 00:19:01.112 }, 
00:19:01.112 { 00:19:01.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.112 "dma_device_type": 2 00:19:01.112 } 00:19:01.112 ], 00:19:01.112 "driver_specific": {} 00:19:01.112 } 00:19:01.112 ] 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.112 BaseBdev3 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.112 [ 00:19:01.112 { 00:19:01.112 "name": "BaseBdev3", 00:19:01.112 "aliases": [ 00:19:01.112 "d8fb14b9-24fc-48d3-b956-fc0458923530" 00:19:01.112 ], 00:19:01.112 "product_name": "Malloc disk", 00:19:01.112 "block_size": 512, 00:19:01.112 "num_blocks": 65536, 00:19:01.112 "uuid": "d8fb14b9-24fc-48d3-b956-fc0458923530", 00:19:01.112 "assigned_rate_limits": { 00:19:01.112 "rw_ios_per_sec": 0, 00:19:01.112 "rw_mbytes_per_sec": 0, 00:19:01.112 "r_mbytes_per_sec": 0, 00:19:01.112 "w_mbytes_per_sec": 0 00:19:01.112 }, 00:19:01.112 "claimed": false, 00:19:01.112 "zoned": false, 00:19:01.112 "supported_io_types": { 00:19:01.112 "read": true, 00:19:01.112 "write": true, 00:19:01.112 "unmap": true, 00:19:01.112 "flush": true, 00:19:01.112 "reset": true, 00:19:01.112 "nvme_admin": false, 00:19:01.112 "nvme_io": false, 00:19:01.112 "nvme_io_md": false, 00:19:01.112 "write_zeroes": true, 00:19:01.112 "zcopy": true, 00:19:01.112 "get_zone_info": false, 00:19:01.112 "zone_management": false, 00:19:01.112 "zone_append": false, 00:19:01.112 "compare": false, 00:19:01.112 "compare_and_write": false, 00:19:01.112 "abort": true, 00:19:01.112 "seek_hole": false, 00:19:01.112 "seek_data": false, 00:19:01.112 "copy": true, 00:19:01.112 "nvme_iov_md": false 00:19:01.112 }, 00:19:01.112 "memory_domains": [ 00:19:01.112 { 00:19:01.112 "dma_device_id": "system", 00:19:01.112 
"dma_device_type": 1 00:19:01.112 }, 00:19:01.112 { 00:19:01.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.112 "dma_device_type": 2 00:19:01.112 } 00:19:01.112 ], 00:19:01.112 "driver_specific": {} 00:19:01.112 } 00:19:01.112 ] 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.112 BaseBdev4 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:01.112 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:01.113 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:01.113 15:45:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.113 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.113 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.113 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:01.113 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.113 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.371 [ 00:19:01.371 { 00:19:01.371 "name": "BaseBdev4", 00:19:01.371 "aliases": [ 00:19:01.371 "eeb51faa-cfea-4914-a969-ad62e6f33e92" 00:19:01.371 ], 00:19:01.371 "product_name": "Malloc disk", 00:19:01.371 "block_size": 512, 00:19:01.371 "num_blocks": 65536, 00:19:01.371 "uuid": "eeb51faa-cfea-4914-a969-ad62e6f33e92", 00:19:01.371 "assigned_rate_limits": { 00:19:01.371 "rw_ios_per_sec": 0, 00:19:01.371 "rw_mbytes_per_sec": 0, 00:19:01.371 "r_mbytes_per_sec": 0, 00:19:01.371 "w_mbytes_per_sec": 0 00:19:01.371 }, 00:19:01.371 "claimed": false, 00:19:01.371 "zoned": false, 00:19:01.371 "supported_io_types": { 00:19:01.371 "read": true, 00:19:01.371 "write": true, 00:19:01.371 "unmap": true, 00:19:01.371 "flush": true, 00:19:01.371 "reset": true, 00:19:01.371 "nvme_admin": false, 00:19:01.371 "nvme_io": false, 00:19:01.371 "nvme_io_md": false, 00:19:01.371 "write_zeroes": true, 00:19:01.371 "zcopy": true, 00:19:01.371 "get_zone_info": false, 00:19:01.371 "zone_management": false, 00:19:01.371 "zone_append": false, 00:19:01.371 "compare": false, 00:19:01.371 "compare_and_write": false, 00:19:01.371 "abort": true, 00:19:01.371 "seek_hole": false, 00:19:01.371 "seek_data": false, 00:19:01.372 "copy": true, 00:19:01.372 "nvme_iov_md": false 00:19:01.372 }, 00:19:01.372 "memory_domains": [ 00:19:01.372 { 00:19:01.372 
"dma_device_id": "system", 00:19:01.372 "dma_device_type": 1 00:19:01.372 }, 00:19:01.372 { 00:19:01.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.372 "dma_device_type": 2 00:19:01.372 } 00:19:01.372 ], 00:19:01.372 "driver_specific": {} 00:19:01.372 } 00:19:01.372 ] 00:19:01.372 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.372 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:01.372 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:01.372 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:01.372 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:01.372 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.372 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.372 [2024-12-06 15:45:44.430112] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:01.372 [2024-12-06 15:45:44.430168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:01.372 [2024-12-06 15:45:44.430197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:01.372 [2024-12-06 15:45:44.432609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:01.372 [2024-12-06 15:45:44.432667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:01.372 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.372 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:19:01.372 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:01.372 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:01.372 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:01.372 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:01.372 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:01.372 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.372 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.372 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.372 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.372 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.372 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.372 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:01.372 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.372 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.372 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.372 "name": "Existed_Raid", 00:19:01.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.372 "strip_size_kb": 64, 00:19:01.372 "state": "configuring", 00:19:01.372 "raid_level": "raid5f", 00:19:01.372 "superblock": false, 00:19:01.372 
"num_base_bdevs": 4, 00:19:01.372 "num_base_bdevs_discovered": 3, 00:19:01.372 "num_base_bdevs_operational": 4, 00:19:01.372 "base_bdevs_list": [ 00:19:01.372 { 00:19:01.372 "name": "BaseBdev1", 00:19:01.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.372 "is_configured": false, 00:19:01.372 "data_offset": 0, 00:19:01.372 "data_size": 0 00:19:01.372 }, 00:19:01.372 { 00:19:01.372 "name": "BaseBdev2", 00:19:01.372 "uuid": "b06b7845-e7f1-4ff1-b46a-8734ceb6cfe5", 00:19:01.372 "is_configured": true, 00:19:01.372 "data_offset": 0, 00:19:01.372 "data_size": 65536 00:19:01.372 }, 00:19:01.372 { 00:19:01.372 "name": "BaseBdev3", 00:19:01.372 "uuid": "d8fb14b9-24fc-48d3-b956-fc0458923530", 00:19:01.372 "is_configured": true, 00:19:01.372 "data_offset": 0, 00:19:01.372 "data_size": 65536 00:19:01.372 }, 00:19:01.372 { 00:19:01.372 "name": "BaseBdev4", 00:19:01.372 "uuid": "eeb51faa-cfea-4914-a969-ad62e6f33e92", 00:19:01.372 "is_configured": true, 00:19:01.372 "data_offset": 0, 00:19:01.372 "data_size": 65536 00:19:01.372 } 00:19:01.372 ] 00:19:01.372 }' 00:19:01.372 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.372 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.631 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:01.631 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.631 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.631 [2024-12-06 15:45:44.861769] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:01.631 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.631 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:19:01.631 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:01.631 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:01.631 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:01.631 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:01.631 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:01.631 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.631 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.631 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.631 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.631 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.631 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.631 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.631 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:01.631 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.631 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.631 "name": "Existed_Raid", 00:19:01.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.631 "strip_size_kb": 64, 00:19:01.631 "state": "configuring", 00:19:01.631 "raid_level": "raid5f", 00:19:01.631 "superblock": false, 00:19:01.631 "num_base_bdevs": 4, 
00:19:01.631 "num_base_bdevs_discovered": 2, 00:19:01.631 "num_base_bdevs_operational": 4, 00:19:01.631 "base_bdevs_list": [ 00:19:01.631 { 00:19:01.631 "name": "BaseBdev1", 00:19:01.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.631 "is_configured": false, 00:19:01.631 "data_offset": 0, 00:19:01.631 "data_size": 0 00:19:01.631 }, 00:19:01.631 { 00:19:01.631 "name": null, 00:19:01.631 "uuid": "b06b7845-e7f1-4ff1-b46a-8734ceb6cfe5", 00:19:01.631 "is_configured": false, 00:19:01.631 "data_offset": 0, 00:19:01.631 "data_size": 65536 00:19:01.631 }, 00:19:01.631 { 00:19:01.631 "name": "BaseBdev3", 00:19:01.631 "uuid": "d8fb14b9-24fc-48d3-b956-fc0458923530", 00:19:01.631 "is_configured": true, 00:19:01.631 "data_offset": 0, 00:19:01.631 "data_size": 65536 00:19:01.631 }, 00:19:01.631 { 00:19:01.631 "name": "BaseBdev4", 00:19:01.631 "uuid": "eeb51faa-cfea-4914-a969-ad62e6f33e92", 00:19:01.631 "is_configured": true, 00:19:01.631 "data_offset": 0, 00:19:01.631 "data_size": 65536 00:19:01.631 } 00:19:01.631 ] 00:19:01.631 }' 00:19:01.631 15:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.631 15:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.197 15:45:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.197 15:45:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:02.197 15:45:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.197 15:45:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.197 15:45:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.197 15:45:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:19:02.197 15:45:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:02.197 15:45:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.197 15:45:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.197 [2024-12-06 15:45:45.383954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:02.197 BaseBdev1 00:19:02.197 15:45:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.197 15:45:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:19:02.197 15:45:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:02.197 15:45:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:02.197 15:45:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:02.197 15:45:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:02.197 15:45:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:02.197 15:45:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:02.197 15:45:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.197 15:45:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.197 15:45:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.197 15:45:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:02.197 15:45:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.197 15:45:45 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.197 [ 00:19:02.197 { 00:19:02.197 "name": "BaseBdev1", 00:19:02.197 "aliases": [ 00:19:02.197 "703a6335-f83f-496a-b8a1-f377e4170497" 00:19:02.197 ], 00:19:02.197 "product_name": "Malloc disk", 00:19:02.197 "block_size": 512, 00:19:02.197 "num_blocks": 65536, 00:19:02.197 "uuid": "703a6335-f83f-496a-b8a1-f377e4170497", 00:19:02.197 "assigned_rate_limits": { 00:19:02.197 "rw_ios_per_sec": 0, 00:19:02.197 "rw_mbytes_per_sec": 0, 00:19:02.197 "r_mbytes_per_sec": 0, 00:19:02.197 "w_mbytes_per_sec": 0 00:19:02.197 }, 00:19:02.197 "claimed": true, 00:19:02.197 "claim_type": "exclusive_write", 00:19:02.197 "zoned": false, 00:19:02.197 "supported_io_types": { 00:19:02.197 "read": true, 00:19:02.197 "write": true, 00:19:02.197 "unmap": true, 00:19:02.197 "flush": true, 00:19:02.197 "reset": true, 00:19:02.197 "nvme_admin": false, 00:19:02.197 "nvme_io": false, 00:19:02.197 "nvme_io_md": false, 00:19:02.197 "write_zeroes": true, 00:19:02.197 "zcopy": true, 00:19:02.197 "get_zone_info": false, 00:19:02.197 "zone_management": false, 00:19:02.197 "zone_append": false, 00:19:02.197 "compare": false, 00:19:02.197 "compare_and_write": false, 00:19:02.197 "abort": true, 00:19:02.197 "seek_hole": false, 00:19:02.197 "seek_data": false, 00:19:02.197 "copy": true, 00:19:02.197 "nvme_iov_md": false 00:19:02.197 }, 00:19:02.197 "memory_domains": [ 00:19:02.197 { 00:19:02.197 "dma_device_id": "system", 00:19:02.197 "dma_device_type": 1 00:19:02.197 }, 00:19:02.197 { 00:19:02.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:02.197 "dma_device_type": 2 00:19:02.197 } 00:19:02.197 ], 00:19:02.197 "driver_specific": {} 00:19:02.197 } 00:19:02.197 ] 00:19:02.197 15:45:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.197 15:45:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:02.197 15:45:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:02.197 15:45:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:02.197 15:45:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:02.197 15:45:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:02.197 15:45:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:02.197 15:45:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:02.197 15:45:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.197 15:45:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.197 15:45:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.197 15:45:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.197 15:45:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:02.197 15:45:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.197 15:45:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.197 15:45:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.197 15:45:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.197 15:45:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.197 "name": "Existed_Raid", 00:19:02.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.197 "strip_size_kb": 64, 00:19:02.197 "state": 
"configuring", 00:19:02.197 "raid_level": "raid5f", 00:19:02.197 "superblock": false, 00:19:02.197 "num_base_bdevs": 4, 00:19:02.197 "num_base_bdevs_discovered": 3, 00:19:02.197 "num_base_bdevs_operational": 4, 00:19:02.197 "base_bdevs_list": [ 00:19:02.197 { 00:19:02.197 "name": "BaseBdev1", 00:19:02.197 "uuid": "703a6335-f83f-496a-b8a1-f377e4170497", 00:19:02.197 "is_configured": true, 00:19:02.197 "data_offset": 0, 00:19:02.197 "data_size": 65536 00:19:02.197 }, 00:19:02.197 { 00:19:02.197 "name": null, 00:19:02.197 "uuid": "b06b7845-e7f1-4ff1-b46a-8734ceb6cfe5", 00:19:02.197 "is_configured": false, 00:19:02.197 "data_offset": 0, 00:19:02.197 "data_size": 65536 00:19:02.197 }, 00:19:02.197 { 00:19:02.197 "name": "BaseBdev3", 00:19:02.197 "uuid": "d8fb14b9-24fc-48d3-b956-fc0458923530", 00:19:02.197 "is_configured": true, 00:19:02.197 "data_offset": 0, 00:19:02.197 "data_size": 65536 00:19:02.197 }, 00:19:02.197 { 00:19:02.197 "name": "BaseBdev4", 00:19:02.197 "uuid": "eeb51faa-cfea-4914-a969-ad62e6f33e92", 00:19:02.197 "is_configured": true, 00:19:02.197 "data_offset": 0, 00:19:02.197 "data_size": 65536 00:19:02.197 } 00:19:02.197 ] 00:19:02.197 }' 00:19:02.197 15:45:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.197 15:45:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.766 15:45:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.766 15:45:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:02.766 15:45:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.766 15:45:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.766 15:45:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.766 15:45:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:19:02.766 15:45:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:19:02.766 15:45:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.766 15:45:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.766 [2024-12-06 15:45:45.867355] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:02.766 15:45:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.766 15:45:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:02.766 15:45:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:02.766 15:45:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:02.766 15:45:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:02.766 15:45:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:02.766 15:45:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:02.766 15:45:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.766 15:45:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.766 15:45:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.766 15:45:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.766 15:45:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.766 15:45:45 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.766 15:45:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.766 15:45:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:02.766 15:45:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.766 15:45:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.766 "name": "Existed_Raid", 00:19:02.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.767 "strip_size_kb": 64, 00:19:02.767 "state": "configuring", 00:19:02.767 "raid_level": "raid5f", 00:19:02.767 "superblock": false, 00:19:02.767 "num_base_bdevs": 4, 00:19:02.767 "num_base_bdevs_discovered": 2, 00:19:02.767 "num_base_bdevs_operational": 4, 00:19:02.767 "base_bdevs_list": [ 00:19:02.767 { 00:19:02.767 "name": "BaseBdev1", 00:19:02.767 "uuid": "703a6335-f83f-496a-b8a1-f377e4170497", 00:19:02.767 "is_configured": true, 00:19:02.767 "data_offset": 0, 00:19:02.767 "data_size": 65536 00:19:02.767 }, 00:19:02.767 { 00:19:02.767 "name": null, 00:19:02.767 "uuid": "b06b7845-e7f1-4ff1-b46a-8734ceb6cfe5", 00:19:02.767 "is_configured": false, 00:19:02.767 "data_offset": 0, 00:19:02.767 "data_size": 65536 00:19:02.767 }, 00:19:02.767 { 00:19:02.767 "name": null, 00:19:02.767 "uuid": "d8fb14b9-24fc-48d3-b956-fc0458923530", 00:19:02.767 "is_configured": false, 00:19:02.767 "data_offset": 0, 00:19:02.767 "data_size": 65536 00:19:02.767 }, 00:19:02.767 { 00:19:02.767 "name": "BaseBdev4", 00:19:02.767 "uuid": "eeb51faa-cfea-4914-a969-ad62e6f33e92", 00:19:02.767 "is_configured": true, 00:19:02.767 "data_offset": 0, 00:19:02.767 "data_size": 65536 00:19:02.767 } 00:19:02.767 ] 00:19:02.767 }' 00:19:02.767 15:45:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.767 15:45:45 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.052 15:45:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.052 15:45:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.052 15:45:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:03.052 15:45:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.052 15:45:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.311 15:45:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:19:03.311 15:45:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:03.311 15:45:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.311 15:45:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.311 [2024-12-06 15:45:46.354662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:03.311 15:45:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.311 15:45:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:03.311 15:45:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:03.311 15:45:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:03.311 15:45:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:03.311 15:45:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:03.311 
15:45:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:03.311 15:45:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.311 15:45:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.311 15:45:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.311 15:45:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.311 15:45:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:03.311 15:45:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.311 15:45:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.311 15:45:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.311 15:45:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.311 15:45:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.311 "name": "Existed_Raid", 00:19:03.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.311 "strip_size_kb": 64, 00:19:03.311 "state": "configuring", 00:19:03.311 "raid_level": "raid5f", 00:19:03.311 "superblock": false, 00:19:03.311 "num_base_bdevs": 4, 00:19:03.311 "num_base_bdevs_discovered": 3, 00:19:03.311 "num_base_bdevs_operational": 4, 00:19:03.311 "base_bdevs_list": [ 00:19:03.311 { 00:19:03.311 "name": "BaseBdev1", 00:19:03.311 "uuid": "703a6335-f83f-496a-b8a1-f377e4170497", 00:19:03.311 "is_configured": true, 00:19:03.311 "data_offset": 0, 00:19:03.311 "data_size": 65536 00:19:03.311 }, 00:19:03.311 { 00:19:03.311 "name": null, 00:19:03.311 "uuid": "b06b7845-e7f1-4ff1-b46a-8734ceb6cfe5", 00:19:03.311 "is_configured": 
false, 00:19:03.311 "data_offset": 0, 00:19:03.311 "data_size": 65536 00:19:03.311 }, 00:19:03.311 { 00:19:03.311 "name": "BaseBdev3", 00:19:03.311 "uuid": "d8fb14b9-24fc-48d3-b956-fc0458923530", 00:19:03.311 "is_configured": true, 00:19:03.311 "data_offset": 0, 00:19:03.311 "data_size": 65536 00:19:03.311 }, 00:19:03.311 { 00:19:03.311 "name": "BaseBdev4", 00:19:03.311 "uuid": "eeb51faa-cfea-4914-a969-ad62e6f33e92", 00:19:03.311 "is_configured": true, 00:19:03.311 "data_offset": 0, 00:19:03.311 "data_size": 65536 00:19:03.311 } 00:19:03.311 ] 00:19:03.311 }' 00:19:03.311 15:45:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.311 15:45:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.569 15:45:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:03.569 15:45:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.569 15:45:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.569 15:45:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.569 15:45:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.569 15:45:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:19:03.569 15:45:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:03.569 15:45:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.569 15:45:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.569 [2024-12-06 15:45:46.794121] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:03.828 15:45:46 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.828 15:45:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:03.828 15:45:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:03.828 15:45:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:03.828 15:45:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:03.828 15:45:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:03.828 15:45:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:03.828 15:45:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.828 15:45:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.828 15:45:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.828 15:45:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.828 15:45:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.828 15:45:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.828 15:45:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:03.828 15:45:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.828 15:45:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.828 15:45:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.828 "name": "Existed_Raid", 00:19:03.828 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:03.828 "strip_size_kb": 64, 00:19:03.828 "state": "configuring", 00:19:03.828 "raid_level": "raid5f", 00:19:03.828 "superblock": false, 00:19:03.828 "num_base_bdevs": 4, 00:19:03.828 "num_base_bdevs_discovered": 2, 00:19:03.828 "num_base_bdevs_operational": 4, 00:19:03.828 "base_bdevs_list": [ 00:19:03.828 { 00:19:03.828 "name": null, 00:19:03.828 "uuid": "703a6335-f83f-496a-b8a1-f377e4170497", 00:19:03.828 "is_configured": false, 00:19:03.828 "data_offset": 0, 00:19:03.828 "data_size": 65536 00:19:03.828 }, 00:19:03.828 { 00:19:03.828 "name": null, 00:19:03.828 "uuid": "b06b7845-e7f1-4ff1-b46a-8734ceb6cfe5", 00:19:03.828 "is_configured": false, 00:19:03.828 "data_offset": 0, 00:19:03.828 "data_size": 65536 00:19:03.828 }, 00:19:03.828 { 00:19:03.828 "name": "BaseBdev3", 00:19:03.828 "uuid": "d8fb14b9-24fc-48d3-b956-fc0458923530", 00:19:03.828 "is_configured": true, 00:19:03.828 "data_offset": 0, 00:19:03.828 "data_size": 65536 00:19:03.828 }, 00:19:03.828 { 00:19:03.828 "name": "BaseBdev4", 00:19:03.828 "uuid": "eeb51faa-cfea-4914-a969-ad62e6f33e92", 00:19:03.828 "is_configured": true, 00:19:03.828 "data_offset": 0, 00:19:03.828 "data_size": 65536 00:19:03.828 } 00:19:03.828 ] 00:19:03.828 }' 00:19:03.828 15:45:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.828 15:45:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.087 15:45:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:04.087 15:45:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.087 15:45:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.087 15:45:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.087 15:45:47 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.087 15:45:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:19:04.087 15:45:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:04.087 15:45:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.087 15:45:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.087 [2024-12-06 15:45:47.359390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:04.087 15:45:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.087 15:45:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:04.087 15:45:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:04.087 15:45:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:04.087 15:45:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:04.087 15:45:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:04.087 15:45:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:04.087 15:45:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.087 15:45:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.087 15:45:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.087 15:45:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.087 15:45:47 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.087 15:45:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.087 15:45:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:04.087 15:45:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.347 15:45:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.347 15:45:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.347 "name": "Existed_Raid", 00:19:04.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.347 "strip_size_kb": 64, 00:19:04.347 "state": "configuring", 00:19:04.347 "raid_level": "raid5f", 00:19:04.347 "superblock": false, 00:19:04.347 "num_base_bdevs": 4, 00:19:04.347 "num_base_bdevs_discovered": 3, 00:19:04.347 "num_base_bdevs_operational": 4, 00:19:04.347 "base_bdevs_list": [ 00:19:04.347 { 00:19:04.347 "name": null, 00:19:04.347 "uuid": "703a6335-f83f-496a-b8a1-f377e4170497", 00:19:04.347 "is_configured": false, 00:19:04.347 "data_offset": 0, 00:19:04.347 "data_size": 65536 00:19:04.347 }, 00:19:04.347 { 00:19:04.347 "name": "BaseBdev2", 00:19:04.347 "uuid": "b06b7845-e7f1-4ff1-b46a-8734ceb6cfe5", 00:19:04.347 "is_configured": true, 00:19:04.347 "data_offset": 0, 00:19:04.347 "data_size": 65536 00:19:04.347 }, 00:19:04.347 { 00:19:04.347 "name": "BaseBdev3", 00:19:04.347 "uuid": "d8fb14b9-24fc-48d3-b956-fc0458923530", 00:19:04.347 "is_configured": true, 00:19:04.347 "data_offset": 0, 00:19:04.347 "data_size": 65536 00:19:04.347 }, 00:19:04.347 { 00:19:04.347 "name": "BaseBdev4", 00:19:04.347 "uuid": "eeb51faa-cfea-4914-a969-ad62e6f33e92", 00:19:04.347 "is_configured": true, 00:19:04.347 "data_offset": 0, 00:19:04.347 "data_size": 65536 00:19:04.347 } 00:19:04.347 ] 00:19:04.347 }' 00:19:04.347 15:45:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.347 15:45:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.607 15:45:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.607 15:45:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.607 15:45:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.607 15:45:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:04.607 15:45:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.607 15:45:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:04.607 15:45:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.607 15:45:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.607 15:45:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.607 15:45:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:04.607 15:45:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.607 15:45:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 703a6335-f83f-496a-b8a1-f377e4170497 00:19:04.607 15:45:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.607 15:45:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.607 [2024-12-06 15:45:47.870878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:04.607 [2024-12-06 
15:45:47.870949] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:04.607 [2024-12-06 15:45:47.870959] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:04.607 [2024-12-06 15:45:47.871262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:04.607 [2024-12-06 15:45:47.878786] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:04.607 [2024-12-06 15:45:47.878818] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:19:04.607 [2024-12-06 15:45:47.879131] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:04.607 NewBaseBdev 00:19:04.607 15:45:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.607 15:45:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:19:04.607 15:45:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:19:04.607 15:45:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:04.607 15:45:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:04.607 15:45:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:04.607 15:45:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:04.607 15:45:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:04.607 15:45:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.607 15:45:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.607 15:45:47 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.607 15:45:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:04.607 15:45:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.607 15:45:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.867 [ 00:19:04.867 { 00:19:04.867 "name": "NewBaseBdev", 00:19:04.867 "aliases": [ 00:19:04.867 "703a6335-f83f-496a-b8a1-f377e4170497" 00:19:04.867 ], 00:19:04.867 "product_name": "Malloc disk", 00:19:04.867 "block_size": 512, 00:19:04.867 "num_blocks": 65536, 00:19:04.867 "uuid": "703a6335-f83f-496a-b8a1-f377e4170497", 00:19:04.867 "assigned_rate_limits": { 00:19:04.867 "rw_ios_per_sec": 0, 00:19:04.867 "rw_mbytes_per_sec": 0, 00:19:04.867 "r_mbytes_per_sec": 0, 00:19:04.867 "w_mbytes_per_sec": 0 00:19:04.867 }, 00:19:04.867 "claimed": true, 00:19:04.867 "claim_type": "exclusive_write", 00:19:04.867 "zoned": false, 00:19:04.867 "supported_io_types": { 00:19:04.867 "read": true, 00:19:04.867 "write": true, 00:19:04.867 "unmap": true, 00:19:04.867 "flush": true, 00:19:04.867 "reset": true, 00:19:04.867 "nvme_admin": false, 00:19:04.867 "nvme_io": false, 00:19:04.868 "nvme_io_md": false, 00:19:04.868 "write_zeroes": true, 00:19:04.868 "zcopy": true, 00:19:04.868 "get_zone_info": false, 00:19:04.868 "zone_management": false, 00:19:04.868 "zone_append": false, 00:19:04.868 "compare": false, 00:19:04.868 "compare_and_write": false, 00:19:04.868 "abort": true, 00:19:04.868 "seek_hole": false, 00:19:04.868 "seek_data": false, 00:19:04.868 "copy": true, 00:19:04.868 "nvme_iov_md": false 00:19:04.868 }, 00:19:04.868 "memory_domains": [ 00:19:04.868 { 00:19:04.868 "dma_device_id": "system", 00:19:04.868 "dma_device_type": 1 00:19:04.868 }, 00:19:04.868 { 00:19:04.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.868 "dma_device_type": 2 00:19:04.868 } 
00:19:04.868 ], 00:19:04.868 "driver_specific": {} 00:19:04.868 } 00:19:04.868 ] 00:19:04.868 15:45:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.868 15:45:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:04.868 15:45:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:19:04.868 15:45:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:04.868 15:45:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:04.868 15:45:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:04.868 15:45:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:04.868 15:45:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:04.868 15:45:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.868 15:45:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.868 15:45:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.868 15:45:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.868 15:45:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.868 15:45:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:04.868 15:45:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.868 15:45:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.868 15:45:47 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.868 15:45:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.868 "name": "Existed_Raid", 00:19:04.868 "uuid": "418c1a74-161b-4a4d-98dc-dad2a12549f1", 00:19:04.868 "strip_size_kb": 64, 00:19:04.868 "state": "online", 00:19:04.868 "raid_level": "raid5f", 00:19:04.868 "superblock": false, 00:19:04.868 "num_base_bdevs": 4, 00:19:04.868 "num_base_bdevs_discovered": 4, 00:19:04.868 "num_base_bdevs_operational": 4, 00:19:04.868 "base_bdevs_list": [ 00:19:04.868 { 00:19:04.868 "name": "NewBaseBdev", 00:19:04.868 "uuid": "703a6335-f83f-496a-b8a1-f377e4170497", 00:19:04.868 "is_configured": true, 00:19:04.868 "data_offset": 0, 00:19:04.868 "data_size": 65536 00:19:04.868 }, 00:19:04.868 { 00:19:04.868 "name": "BaseBdev2", 00:19:04.868 "uuid": "b06b7845-e7f1-4ff1-b46a-8734ceb6cfe5", 00:19:04.868 "is_configured": true, 00:19:04.868 "data_offset": 0, 00:19:04.868 "data_size": 65536 00:19:04.868 }, 00:19:04.868 { 00:19:04.868 "name": "BaseBdev3", 00:19:04.868 "uuid": "d8fb14b9-24fc-48d3-b956-fc0458923530", 00:19:04.868 "is_configured": true, 00:19:04.868 "data_offset": 0, 00:19:04.868 "data_size": 65536 00:19:04.868 }, 00:19:04.868 { 00:19:04.868 "name": "BaseBdev4", 00:19:04.868 "uuid": "eeb51faa-cfea-4914-a969-ad62e6f33e92", 00:19:04.868 "is_configured": true, 00:19:04.868 "data_offset": 0, 00:19:04.868 "data_size": 65536 00:19:04.868 } 00:19:04.868 ] 00:19:04.868 }' 00:19:04.868 15:45:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.868 15:45:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.128 15:45:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:19:05.128 15:45:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:05.128 15:45:48 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:05.128 15:45:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:05.128 15:45:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:05.128 15:45:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:05.128 15:45:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:05.128 15:45:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:05.128 15:45:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.128 15:45:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.128 [2024-12-06 15:45:48.344003] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:05.129 15:45:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.129 15:45:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:05.129 "name": "Existed_Raid", 00:19:05.129 "aliases": [ 00:19:05.129 "418c1a74-161b-4a4d-98dc-dad2a12549f1" 00:19:05.129 ], 00:19:05.129 "product_name": "Raid Volume", 00:19:05.129 "block_size": 512, 00:19:05.129 "num_blocks": 196608, 00:19:05.129 "uuid": "418c1a74-161b-4a4d-98dc-dad2a12549f1", 00:19:05.129 "assigned_rate_limits": { 00:19:05.129 "rw_ios_per_sec": 0, 00:19:05.129 "rw_mbytes_per_sec": 0, 00:19:05.129 "r_mbytes_per_sec": 0, 00:19:05.129 "w_mbytes_per_sec": 0 00:19:05.129 }, 00:19:05.129 "claimed": false, 00:19:05.129 "zoned": false, 00:19:05.129 "supported_io_types": { 00:19:05.129 "read": true, 00:19:05.129 "write": true, 00:19:05.129 "unmap": false, 00:19:05.129 "flush": false, 00:19:05.129 "reset": true, 00:19:05.129 "nvme_admin": false, 00:19:05.129 "nvme_io": false, 00:19:05.129 "nvme_io_md": 
false, 00:19:05.129 "write_zeroes": true, 00:19:05.129 "zcopy": false, 00:19:05.129 "get_zone_info": false, 00:19:05.129 "zone_management": false, 00:19:05.129 "zone_append": false, 00:19:05.129 "compare": false, 00:19:05.129 "compare_and_write": false, 00:19:05.129 "abort": false, 00:19:05.129 "seek_hole": false, 00:19:05.129 "seek_data": false, 00:19:05.129 "copy": false, 00:19:05.129 "nvme_iov_md": false 00:19:05.129 }, 00:19:05.129 "driver_specific": { 00:19:05.129 "raid": { 00:19:05.129 "uuid": "418c1a74-161b-4a4d-98dc-dad2a12549f1", 00:19:05.129 "strip_size_kb": 64, 00:19:05.129 "state": "online", 00:19:05.129 "raid_level": "raid5f", 00:19:05.129 "superblock": false, 00:19:05.129 "num_base_bdevs": 4, 00:19:05.129 "num_base_bdevs_discovered": 4, 00:19:05.129 "num_base_bdevs_operational": 4, 00:19:05.129 "base_bdevs_list": [ 00:19:05.129 { 00:19:05.129 "name": "NewBaseBdev", 00:19:05.129 "uuid": "703a6335-f83f-496a-b8a1-f377e4170497", 00:19:05.129 "is_configured": true, 00:19:05.129 "data_offset": 0, 00:19:05.129 "data_size": 65536 00:19:05.129 }, 00:19:05.129 { 00:19:05.129 "name": "BaseBdev2", 00:19:05.129 "uuid": "b06b7845-e7f1-4ff1-b46a-8734ceb6cfe5", 00:19:05.129 "is_configured": true, 00:19:05.129 "data_offset": 0, 00:19:05.129 "data_size": 65536 00:19:05.129 }, 00:19:05.129 { 00:19:05.129 "name": "BaseBdev3", 00:19:05.129 "uuid": "d8fb14b9-24fc-48d3-b956-fc0458923530", 00:19:05.129 "is_configured": true, 00:19:05.129 "data_offset": 0, 00:19:05.129 "data_size": 65536 00:19:05.129 }, 00:19:05.129 { 00:19:05.129 "name": "BaseBdev4", 00:19:05.129 "uuid": "eeb51faa-cfea-4914-a969-ad62e6f33e92", 00:19:05.129 "is_configured": true, 00:19:05.129 "data_offset": 0, 00:19:05.129 "data_size": 65536 00:19:05.129 } 00:19:05.129 ] 00:19:05.129 } 00:19:05.129 } 00:19:05.129 }' 00:19:05.129 15:45:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:05.389 15:45:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:19:05.389 BaseBdev2 00:19:05.389 BaseBdev3 00:19:05.389 BaseBdev4' 00:19:05.389 15:45:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:05.389 15:45:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:05.389 15:45:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:05.389 15:45:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:19:05.389 15:45:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:05.389 15:45:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.389 15:45:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.389 15:45:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.389 15:45:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:05.389 15:45:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:05.389 15:45:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:05.389 15:45:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:05.389 15:45:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:05.389 15:45:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.389 15:45:48 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:05.389 15:45:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.389 15:45:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:05.389 15:45:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:05.389 15:45:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:05.389 15:45:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:05.389 15:45:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:05.389 15:45:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.389 15:45:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.389 15:45:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.389 15:45:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:05.389 15:45:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:05.389 15:45:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:05.389 15:45:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:05.389 15:45:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:05.389 15:45:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.389 15:45:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.389 15:45:48 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.389 15:45:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:05.389 15:45:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:05.389 15:45:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:05.389 15:45:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.389 15:45:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.389 [2024-12-06 15:45:48.639357] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:05.389 [2024-12-06 15:45:48.639397] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:05.389 [2024-12-06 15:45:48.639483] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:05.389 [2024-12-06 15:45:48.639844] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:05.389 [2024-12-06 15:45:48.639866] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:19:05.389 15:45:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.390 15:45:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82789 00:19:05.390 15:45:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 82789 ']' 00:19:05.390 15:45:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 82789 00:19:05.390 15:45:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:19:05.390 15:45:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:19:05.390 15:45:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82789 00:19:05.649 15:45:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:05.649 15:45:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:05.649 killing process with pid 82789 00:19:05.649 15:45:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82789' 00:19:05.649 15:45:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 82789 00:19:05.649 [2024-12-06 15:45:48.692380] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:05.649 15:45:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 82789 00:19:05.908 [2024-12-06 15:45:49.124021] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:19:07.292 00:19:07.292 real 0m11.347s 00:19:07.292 user 0m17.585s 00:19:07.292 sys 0m2.506s 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:07.292 ************************************ 00:19:07.292 END TEST raid5f_state_function_test 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.292 ************************************ 00:19:07.292 15:45:50 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:19:07.292 15:45:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:07.292 15:45:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:07.292 15:45:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:07.292 ************************************ 00:19:07.292 START TEST 
raid5f_state_function_test_sb 00:19:07.292 ************************************ 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:19:07.292 
15:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83455 00:19:07.292 Process raid pid: 83455 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83455' 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:07.292 15:45:50 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83455 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83455 ']' 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:07.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:07.292 15:45:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.292 [2024-12-06 15:45:50.542356] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:19:07.292 [2024-12-06 15:45:50.542497] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:07.551 [2024-12-06 15:45:50.720912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.810 [2024-12-06 15:45:50.863954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.071 [2024-12-06 15:45:51.111900] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:08.071 [2024-12-06 15:45:51.111941] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:08.071 15:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:08.071 15:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:19:08.071 15:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:08.071 15:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.071 15:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.331 [2024-12-06 15:45:51.365708] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:08.331 [2024-12-06 15:45:51.365777] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:08.331 [2024-12-06 15:45:51.365789] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:08.331 [2024-12-06 15:45:51.365803] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:08.331 [2024-12-06 15:45:51.365811] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:19:08.331 [2024-12-06 15:45:51.365823] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:08.331 [2024-12-06 15:45:51.365831] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:08.331 [2024-12-06 15:45:51.365843] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:08.331 15:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.331 15:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:08.331 15:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:08.331 15:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:08.331 15:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:08.331 15:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:08.331 15:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:08.331 15:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:08.331 15:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:08.331 15:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:08.331 15:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:08.331 15:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.331 15:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:08.331 15:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.331 15:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:08.331 15:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.331 15:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:08.331 "name": "Existed_Raid", 00:19:08.331 "uuid": "8e44b844-ff5d-470a-80af-c5110f9db05f", 00:19:08.331 "strip_size_kb": 64, 00:19:08.331 "state": "configuring", 00:19:08.331 "raid_level": "raid5f", 00:19:08.331 "superblock": true, 00:19:08.331 "num_base_bdevs": 4, 00:19:08.331 "num_base_bdevs_discovered": 0, 00:19:08.331 "num_base_bdevs_operational": 4, 00:19:08.331 "base_bdevs_list": [ 00:19:08.331 { 00:19:08.331 "name": "BaseBdev1", 00:19:08.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.331 "is_configured": false, 00:19:08.331 "data_offset": 0, 00:19:08.331 "data_size": 0 00:19:08.331 }, 00:19:08.331 { 00:19:08.331 "name": "BaseBdev2", 00:19:08.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.331 "is_configured": false, 00:19:08.331 "data_offset": 0, 00:19:08.331 "data_size": 0 00:19:08.331 }, 00:19:08.331 { 00:19:08.331 "name": "BaseBdev3", 00:19:08.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.331 "is_configured": false, 00:19:08.331 "data_offset": 0, 00:19:08.331 "data_size": 0 00:19:08.331 }, 00:19:08.331 { 00:19:08.331 "name": "BaseBdev4", 00:19:08.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.331 "is_configured": false, 00:19:08.331 "data_offset": 0, 00:19:08.331 "data_size": 0 00:19:08.331 } 00:19:08.331 ] 00:19:08.331 }' 00:19:08.331 15:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:08.331 15:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:19:08.591 15:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:08.591 15:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.591 15:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.591 [2024-12-06 15:45:51.809019] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:08.591 [2024-12-06 15:45:51.809075] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:08.591 15:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.591 15:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:08.591 15:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.591 15:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.591 [2024-12-06 15:45:51.816980] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:08.591 [2024-12-06 15:45:51.817029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:08.591 [2024-12-06 15:45:51.817040] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:08.591 [2024-12-06 15:45:51.817053] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:08.591 [2024-12-06 15:45:51.817061] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:08.591 [2024-12-06 15:45:51.817073] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:08.591 [2024-12-06 15:45:51.817081] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:08.591 [2024-12-06 15:45:51.817094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:08.591 15:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.591 15:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:08.591 15:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.591 15:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.591 [2024-12-06 15:45:51.865671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:08.591 BaseBdev1 00:19:08.592 15:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.592 15:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:08.592 15:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:08.592 15:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:08.592 15:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:08.592 15:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:08.592 15:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:08.592 15:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:08.592 15:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.592 15:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:19:08.592 15:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.592 15:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:08.592 15:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.592 15:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.852 [ 00:19:08.852 { 00:19:08.852 "name": "BaseBdev1", 00:19:08.852 "aliases": [ 00:19:08.852 "1905d394-115b-4ce6-bbfe-0989284f13fd" 00:19:08.852 ], 00:19:08.852 "product_name": "Malloc disk", 00:19:08.852 "block_size": 512, 00:19:08.852 "num_blocks": 65536, 00:19:08.852 "uuid": "1905d394-115b-4ce6-bbfe-0989284f13fd", 00:19:08.852 "assigned_rate_limits": { 00:19:08.852 "rw_ios_per_sec": 0, 00:19:08.852 "rw_mbytes_per_sec": 0, 00:19:08.852 "r_mbytes_per_sec": 0, 00:19:08.852 "w_mbytes_per_sec": 0 00:19:08.852 }, 00:19:08.852 "claimed": true, 00:19:08.852 "claim_type": "exclusive_write", 00:19:08.852 "zoned": false, 00:19:08.852 "supported_io_types": { 00:19:08.852 "read": true, 00:19:08.852 "write": true, 00:19:08.852 "unmap": true, 00:19:08.852 "flush": true, 00:19:08.852 "reset": true, 00:19:08.852 "nvme_admin": false, 00:19:08.852 "nvme_io": false, 00:19:08.852 "nvme_io_md": false, 00:19:08.852 "write_zeroes": true, 00:19:08.852 "zcopy": true, 00:19:08.852 "get_zone_info": false, 00:19:08.852 "zone_management": false, 00:19:08.852 "zone_append": false, 00:19:08.852 "compare": false, 00:19:08.852 "compare_and_write": false, 00:19:08.852 "abort": true, 00:19:08.852 "seek_hole": false, 00:19:08.852 "seek_data": false, 00:19:08.852 "copy": true, 00:19:08.852 "nvme_iov_md": false 00:19:08.852 }, 00:19:08.852 "memory_domains": [ 00:19:08.852 { 00:19:08.852 "dma_device_id": "system", 00:19:08.852 "dma_device_type": 1 00:19:08.852 }, 00:19:08.852 { 00:19:08.852 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:08.852 "dma_device_type": 2 00:19:08.852 } 00:19:08.852 ], 00:19:08.852 "driver_specific": {} 00:19:08.852 } 00:19:08.852 ] 00:19:08.852 15:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.852 15:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:08.852 15:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:08.852 15:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:08.852 15:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:08.852 15:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:08.852 15:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:08.852 15:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:08.852 15:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:08.852 15:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:08.852 15:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:08.852 15:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:08.852 15:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:08.852 15:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.852 15:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.852 15:45:51 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.852 15:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.852 15:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:08.852 "name": "Existed_Raid", 00:19:08.852 "uuid": "4328cf03-e08e-4d5d-bfd8-a53daad867ff", 00:19:08.852 "strip_size_kb": 64, 00:19:08.852 "state": "configuring", 00:19:08.852 "raid_level": "raid5f", 00:19:08.852 "superblock": true, 00:19:08.852 "num_base_bdevs": 4, 00:19:08.852 "num_base_bdevs_discovered": 1, 00:19:08.852 "num_base_bdevs_operational": 4, 00:19:08.852 "base_bdevs_list": [ 00:19:08.852 { 00:19:08.852 "name": "BaseBdev1", 00:19:08.852 "uuid": "1905d394-115b-4ce6-bbfe-0989284f13fd", 00:19:08.852 "is_configured": true, 00:19:08.852 "data_offset": 2048, 00:19:08.852 "data_size": 63488 00:19:08.852 }, 00:19:08.852 { 00:19:08.852 "name": "BaseBdev2", 00:19:08.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.852 "is_configured": false, 00:19:08.852 "data_offset": 0, 00:19:08.852 "data_size": 0 00:19:08.852 }, 00:19:08.852 { 00:19:08.852 "name": "BaseBdev3", 00:19:08.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.852 "is_configured": false, 00:19:08.852 "data_offset": 0, 00:19:08.852 "data_size": 0 00:19:08.852 }, 00:19:08.852 { 00:19:08.852 "name": "BaseBdev4", 00:19:08.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.852 "is_configured": false, 00:19:08.852 "data_offset": 0, 00:19:08.852 "data_size": 0 00:19:08.852 } 00:19:08.852 ] 00:19:08.852 }' 00:19:08.852 15:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:08.852 15:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.112 15:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:09.113 15:45:52 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.113 15:45:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.113 [2024-12-06 15:45:52.349093] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:09.113 [2024-12-06 15:45:52.349169] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:09.113 15:45:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.113 15:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:09.113 15:45:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.113 15:45:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.113 [2024-12-06 15:45:52.361147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:09.113 [2024-12-06 15:45:52.363595] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:09.113 [2024-12-06 15:45:52.363644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:09.113 [2024-12-06 15:45:52.363656] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:09.113 [2024-12-06 15:45:52.363670] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:09.113 [2024-12-06 15:45:52.363678] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:09.113 [2024-12-06 15:45:52.363690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:09.113 15:45:52 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.113 15:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:09.113 15:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:09.113 15:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:09.113 15:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:09.113 15:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:09.113 15:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:09.113 15:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:09.113 15:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:09.113 15:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.113 15:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.113 15:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:09.113 15:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.113 15:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.113 15:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:09.113 15:45:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.113 15:45:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.113 15:45:52 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.372 15:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.372 "name": "Existed_Raid", 00:19:09.372 "uuid": "6ab216ab-da83-4ae6-8569-03cdb6422348", 00:19:09.372 "strip_size_kb": 64, 00:19:09.372 "state": "configuring", 00:19:09.372 "raid_level": "raid5f", 00:19:09.372 "superblock": true, 00:19:09.372 "num_base_bdevs": 4, 00:19:09.372 "num_base_bdevs_discovered": 1, 00:19:09.372 "num_base_bdevs_operational": 4, 00:19:09.372 "base_bdevs_list": [ 00:19:09.372 { 00:19:09.372 "name": "BaseBdev1", 00:19:09.372 "uuid": "1905d394-115b-4ce6-bbfe-0989284f13fd", 00:19:09.372 "is_configured": true, 00:19:09.372 "data_offset": 2048, 00:19:09.373 "data_size": 63488 00:19:09.373 }, 00:19:09.373 { 00:19:09.373 "name": "BaseBdev2", 00:19:09.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.373 "is_configured": false, 00:19:09.373 "data_offset": 0, 00:19:09.373 "data_size": 0 00:19:09.373 }, 00:19:09.373 { 00:19:09.373 "name": "BaseBdev3", 00:19:09.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.373 "is_configured": false, 00:19:09.373 "data_offset": 0, 00:19:09.373 "data_size": 0 00:19:09.373 }, 00:19:09.373 { 00:19:09.373 "name": "BaseBdev4", 00:19:09.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.373 "is_configured": false, 00:19:09.373 "data_offset": 0, 00:19:09.373 "data_size": 0 00:19:09.373 } 00:19:09.373 ] 00:19:09.373 }' 00:19:09.373 15:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.373 15:45:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.632 15:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:09.632 15:45:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:09.632 15:45:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.632 [2024-12-06 15:45:52.774646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:09.632 BaseBdev2 00:19:09.633 15:45:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.633 15:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:09.633 15:45:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:09.633 15:45:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:09.633 15:45:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:09.633 15:45:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:09.633 15:45:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:09.633 15:45:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:09.633 15:45:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.633 15:45:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.633 15:45:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.633 15:45:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:09.633 15:45:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.633 15:45:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.633 [ 00:19:09.633 { 00:19:09.633 "name": "BaseBdev2", 00:19:09.633 "aliases": [ 00:19:09.633 
"ccd9d6f4-6148-43ae-8eaa-99d9207c3438" 00:19:09.633 ], 00:19:09.633 "product_name": "Malloc disk", 00:19:09.633 "block_size": 512, 00:19:09.633 "num_blocks": 65536, 00:19:09.633 "uuid": "ccd9d6f4-6148-43ae-8eaa-99d9207c3438", 00:19:09.633 "assigned_rate_limits": { 00:19:09.633 "rw_ios_per_sec": 0, 00:19:09.633 "rw_mbytes_per_sec": 0, 00:19:09.633 "r_mbytes_per_sec": 0, 00:19:09.633 "w_mbytes_per_sec": 0 00:19:09.633 }, 00:19:09.633 "claimed": true, 00:19:09.633 "claim_type": "exclusive_write", 00:19:09.633 "zoned": false, 00:19:09.633 "supported_io_types": { 00:19:09.633 "read": true, 00:19:09.633 "write": true, 00:19:09.633 "unmap": true, 00:19:09.633 "flush": true, 00:19:09.633 "reset": true, 00:19:09.633 "nvme_admin": false, 00:19:09.633 "nvme_io": false, 00:19:09.633 "nvme_io_md": false, 00:19:09.633 "write_zeroes": true, 00:19:09.633 "zcopy": true, 00:19:09.633 "get_zone_info": false, 00:19:09.633 "zone_management": false, 00:19:09.633 "zone_append": false, 00:19:09.633 "compare": false, 00:19:09.633 "compare_and_write": false, 00:19:09.633 "abort": true, 00:19:09.633 "seek_hole": false, 00:19:09.633 "seek_data": false, 00:19:09.633 "copy": true, 00:19:09.633 "nvme_iov_md": false 00:19:09.633 }, 00:19:09.633 "memory_domains": [ 00:19:09.633 { 00:19:09.633 "dma_device_id": "system", 00:19:09.633 "dma_device_type": 1 00:19:09.633 }, 00:19:09.633 { 00:19:09.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:09.633 "dma_device_type": 2 00:19:09.633 } 00:19:09.633 ], 00:19:09.633 "driver_specific": {} 00:19:09.633 } 00:19:09.633 ] 00:19:09.633 15:45:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.633 15:45:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:09.633 15:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:09.633 15:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:19:09.633 15:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:09.633 15:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:09.633 15:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:09.633 15:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:09.633 15:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:09.633 15:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:09.633 15:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.633 15:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.633 15:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:09.633 15:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.633 15:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.633 15:45:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.633 15:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:09.633 15:45:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.633 15:45:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.633 15:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.633 "name": "Existed_Raid", 00:19:09.633 "uuid": 
"6ab216ab-da83-4ae6-8569-03cdb6422348", 00:19:09.633 "strip_size_kb": 64, 00:19:09.633 "state": "configuring", 00:19:09.633 "raid_level": "raid5f", 00:19:09.633 "superblock": true, 00:19:09.633 "num_base_bdevs": 4, 00:19:09.633 "num_base_bdevs_discovered": 2, 00:19:09.633 "num_base_bdevs_operational": 4, 00:19:09.633 "base_bdevs_list": [ 00:19:09.633 { 00:19:09.633 "name": "BaseBdev1", 00:19:09.633 "uuid": "1905d394-115b-4ce6-bbfe-0989284f13fd", 00:19:09.633 "is_configured": true, 00:19:09.633 "data_offset": 2048, 00:19:09.633 "data_size": 63488 00:19:09.633 }, 00:19:09.633 { 00:19:09.633 "name": "BaseBdev2", 00:19:09.633 "uuid": "ccd9d6f4-6148-43ae-8eaa-99d9207c3438", 00:19:09.633 "is_configured": true, 00:19:09.633 "data_offset": 2048, 00:19:09.633 "data_size": 63488 00:19:09.633 }, 00:19:09.633 { 00:19:09.633 "name": "BaseBdev3", 00:19:09.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.633 "is_configured": false, 00:19:09.633 "data_offset": 0, 00:19:09.633 "data_size": 0 00:19:09.633 }, 00:19:09.633 { 00:19:09.633 "name": "BaseBdev4", 00:19:09.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.633 "is_configured": false, 00:19:09.633 "data_offset": 0, 00:19:09.633 "data_size": 0 00:19:09.633 } 00:19:09.633 ] 00:19:09.633 }' 00:19:09.633 15:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.633 15:45:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.203 15:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:10.203 15:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.203 15:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.203 [2024-12-06 15:45:53.316759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:10.203 BaseBdev3 
00:19:10.203 15:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.203 15:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:19:10.203 15:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:19:10.203 15:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:10.203 15:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:10.203 15:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:10.203 15:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:10.203 15:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:10.203 15:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.203 15:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.203 15:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.203 15:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:10.203 15:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.203 15:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.203 [ 00:19:10.203 { 00:19:10.203 "name": "BaseBdev3", 00:19:10.203 "aliases": [ 00:19:10.203 "0aaef280-7a66-4097-b683-c04e369a2026" 00:19:10.203 ], 00:19:10.203 "product_name": "Malloc disk", 00:19:10.203 "block_size": 512, 00:19:10.203 "num_blocks": 65536, 00:19:10.203 "uuid": "0aaef280-7a66-4097-b683-c04e369a2026", 00:19:10.203 
"assigned_rate_limits": { 00:19:10.203 "rw_ios_per_sec": 0, 00:19:10.203 "rw_mbytes_per_sec": 0, 00:19:10.203 "r_mbytes_per_sec": 0, 00:19:10.203 "w_mbytes_per_sec": 0 00:19:10.203 }, 00:19:10.203 "claimed": true, 00:19:10.203 "claim_type": "exclusive_write", 00:19:10.203 "zoned": false, 00:19:10.203 "supported_io_types": { 00:19:10.203 "read": true, 00:19:10.203 "write": true, 00:19:10.203 "unmap": true, 00:19:10.203 "flush": true, 00:19:10.203 "reset": true, 00:19:10.203 "nvme_admin": false, 00:19:10.203 "nvme_io": false, 00:19:10.203 "nvme_io_md": false, 00:19:10.203 "write_zeroes": true, 00:19:10.203 "zcopy": true, 00:19:10.203 "get_zone_info": false, 00:19:10.203 "zone_management": false, 00:19:10.203 "zone_append": false, 00:19:10.203 "compare": false, 00:19:10.203 "compare_and_write": false, 00:19:10.203 "abort": true, 00:19:10.203 "seek_hole": false, 00:19:10.203 "seek_data": false, 00:19:10.203 "copy": true, 00:19:10.203 "nvme_iov_md": false 00:19:10.203 }, 00:19:10.203 "memory_domains": [ 00:19:10.203 { 00:19:10.203 "dma_device_id": "system", 00:19:10.203 "dma_device_type": 1 00:19:10.203 }, 00:19:10.203 { 00:19:10.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.203 "dma_device_type": 2 00:19:10.203 } 00:19:10.203 ], 00:19:10.203 "driver_specific": {} 00:19:10.203 } 00:19:10.203 ] 00:19:10.203 15:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.203 15:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:10.203 15:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:10.203 15:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:10.203 15:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:10.203 15:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:19:10.203 15:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:10.203 15:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:10.203 15:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:10.203 15:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:10.203 15:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:10.203 15:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:10.203 15:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:10.203 15:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:10.203 15:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.203 15:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:10.203 15:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.203 15:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.203 15:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.203 15:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:10.203 "name": "Existed_Raid", 00:19:10.203 "uuid": "6ab216ab-da83-4ae6-8569-03cdb6422348", 00:19:10.203 "strip_size_kb": 64, 00:19:10.203 "state": "configuring", 00:19:10.203 "raid_level": "raid5f", 00:19:10.203 "superblock": true, 00:19:10.203 "num_base_bdevs": 4, 00:19:10.203 "num_base_bdevs_discovered": 3, 
00:19:10.203 "num_base_bdevs_operational": 4, 00:19:10.203 "base_bdevs_list": [ 00:19:10.203 { 00:19:10.203 "name": "BaseBdev1", 00:19:10.203 "uuid": "1905d394-115b-4ce6-bbfe-0989284f13fd", 00:19:10.203 "is_configured": true, 00:19:10.203 "data_offset": 2048, 00:19:10.203 "data_size": 63488 00:19:10.203 }, 00:19:10.203 { 00:19:10.203 "name": "BaseBdev2", 00:19:10.203 "uuid": "ccd9d6f4-6148-43ae-8eaa-99d9207c3438", 00:19:10.203 "is_configured": true, 00:19:10.203 "data_offset": 2048, 00:19:10.203 "data_size": 63488 00:19:10.203 }, 00:19:10.203 { 00:19:10.203 "name": "BaseBdev3", 00:19:10.203 "uuid": "0aaef280-7a66-4097-b683-c04e369a2026", 00:19:10.203 "is_configured": true, 00:19:10.203 "data_offset": 2048, 00:19:10.203 "data_size": 63488 00:19:10.203 }, 00:19:10.203 { 00:19:10.203 "name": "BaseBdev4", 00:19:10.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.203 "is_configured": false, 00:19:10.203 "data_offset": 0, 00:19:10.203 "data_size": 0 00:19:10.203 } 00:19:10.203 ] 00:19:10.203 }' 00:19:10.203 15:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:10.203 15:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.772 15:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:19:10.772 15:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.772 15:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.772 [2024-12-06 15:45:53.814714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:10.772 [2024-12-06 15:45:53.815062] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:10.772 [2024-12-06 15:45:53.815082] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:10.772 [2024-12-06 
15:45:53.815417] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:10.772 BaseBdev4 00:19:10.772 15:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.772 15:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:19:10.772 15:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:19:10.772 15:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:10.772 15:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:10.772 15:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:10.772 15:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:10.772 15:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:10.772 15:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.772 15:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.772 [2024-12-06 15:45:53.822986] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:10.772 [2024-12-06 15:45:53.823021] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:10.772 [2024-12-06 15:45:53.823301] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:10.772 15:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.773 15:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:10.773 15:45:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.773 15:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.773 [ 00:19:10.773 { 00:19:10.773 "name": "BaseBdev4", 00:19:10.773 "aliases": [ 00:19:10.773 "b0fb44ee-3cbe-4006-a1a5-5c765320a701" 00:19:10.773 ], 00:19:10.773 "product_name": "Malloc disk", 00:19:10.773 "block_size": 512, 00:19:10.773 "num_blocks": 65536, 00:19:10.773 "uuid": "b0fb44ee-3cbe-4006-a1a5-5c765320a701", 00:19:10.773 "assigned_rate_limits": { 00:19:10.773 "rw_ios_per_sec": 0, 00:19:10.773 "rw_mbytes_per_sec": 0, 00:19:10.773 "r_mbytes_per_sec": 0, 00:19:10.773 "w_mbytes_per_sec": 0 00:19:10.773 }, 00:19:10.773 "claimed": true, 00:19:10.773 "claim_type": "exclusive_write", 00:19:10.773 "zoned": false, 00:19:10.773 "supported_io_types": { 00:19:10.773 "read": true, 00:19:10.773 "write": true, 00:19:10.773 "unmap": true, 00:19:10.773 "flush": true, 00:19:10.773 "reset": true, 00:19:10.773 "nvme_admin": false, 00:19:10.773 "nvme_io": false, 00:19:10.773 "nvme_io_md": false, 00:19:10.773 "write_zeroes": true, 00:19:10.773 "zcopy": true, 00:19:10.773 "get_zone_info": false, 00:19:10.773 "zone_management": false, 00:19:10.773 "zone_append": false, 00:19:10.773 "compare": false, 00:19:10.773 "compare_and_write": false, 00:19:10.773 "abort": true, 00:19:10.773 "seek_hole": false, 00:19:10.773 "seek_data": false, 00:19:10.773 "copy": true, 00:19:10.773 "nvme_iov_md": false 00:19:10.773 }, 00:19:10.773 "memory_domains": [ 00:19:10.773 { 00:19:10.773 "dma_device_id": "system", 00:19:10.773 "dma_device_type": 1 00:19:10.773 }, 00:19:10.773 { 00:19:10.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.773 "dma_device_type": 2 00:19:10.773 } 00:19:10.773 ], 00:19:10.773 "driver_specific": {} 00:19:10.773 } 00:19:10.773 ] 00:19:10.773 15:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.773 15:45:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:10.773 15:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:10.773 15:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:10.773 15:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:19:10.773 15:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:10.773 15:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:10.773 15:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:10.773 15:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:10.773 15:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:10.773 15:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:10.773 15:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:10.773 15:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:10.773 15:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:10.773 15:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.773 15:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.773 15:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:10.773 15:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:19:10.773 15:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.773 15:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:10.773 "name": "Existed_Raid", 00:19:10.773 "uuid": "6ab216ab-da83-4ae6-8569-03cdb6422348", 00:19:10.773 "strip_size_kb": 64, 00:19:10.773 "state": "online", 00:19:10.773 "raid_level": "raid5f", 00:19:10.773 "superblock": true, 00:19:10.773 "num_base_bdevs": 4, 00:19:10.773 "num_base_bdevs_discovered": 4, 00:19:10.773 "num_base_bdevs_operational": 4, 00:19:10.773 "base_bdevs_list": [ 00:19:10.773 { 00:19:10.773 "name": "BaseBdev1", 00:19:10.773 "uuid": "1905d394-115b-4ce6-bbfe-0989284f13fd", 00:19:10.773 "is_configured": true, 00:19:10.773 "data_offset": 2048, 00:19:10.773 "data_size": 63488 00:19:10.773 }, 00:19:10.773 { 00:19:10.773 "name": "BaseBdev2", 00:19:10.773 "uuid": "ccd9d6f4-6148-43ae-8eaa-99d9207c3438", 00:19:10.773 "is_configured": true, 00:19:10.773 "data_offset": 2048, 00:19:10.773 "data_size": 63488 00:19:10.773 }, 00:19:10.773 { 00:19:10.773 "name": "BaseBdev3", 00:19:10.773 "uuid": "0aaef280-7a66-4097-b683-c04e369a2026", 00:19:10.773 "is_configured": true, 00:19:10.773 "data_offset": 2048, 00:19:10.773 "data_size": 63488 00:19:10.773 }, 00:19:10.773 { 00:19:10.773 "name": "BaseBdev4", 00:19:10.773 "uuid": "b0fb44ee-3cbe-4006-a1a5-5c765320a701", 00:19:10.773 "is_configured": true, 00:19:10.773 "data_offset": 2048, 00:19:10.773 "data_size": 63488 00:19:10.773 } 00:19:10.773 ] 00:19:10.773 }' 00:19:10.773 15:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:10.773 15:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.033 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:11.033 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:19:11.033 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:11.034 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:11.034 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:11.034 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:11.034 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:11.034 15:45:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.034 15:45:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.034 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:11.034 [2024-12-06 15:45:54.264211] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:11.034 15:45:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.034 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:11.034 "name": "Existed_Raid", 00:19:11.034 "aliases": [ 00:19:11.034 "6ab216ab-da83-4ae6-8569-03cdb6422348" 00:19:11.034 ], 00:19:11.034 "product_name": "Raid Volume", 00:19:11.034 "block_size": 512, 00:19:11.034 "num_blocks": 190464, 00:19:11.034 "uuid": "6ab216ab-da83-4ae6-8569-03cdb6422348", 00:19:11.034 "assigned_rate_limits": { 00:19:11.034 "rw_ios_per_sec": 0, 00:19:11.034 "rw_mbytes_per_sec": 0, 00:19:11.034 "r_mbytes_per_sec": 0, 00:19:11.034 "w_mbytes_per_sec": 0 00:19:11.034 }, 00:19:11.034 "claimed": false, 00:19:11.034 "zoned": false, 00:19:11.034 "supported_io_types": { 00:19:11.034 "read": true, 00:19:11.034 "write": true, 00:19:11.034 "unmap": false, 00:19:11.034 "flush": false, 
00:19:11.034 "reset": true, 00:19:11.034 "nvme_admin": false, 00:19:11.034 "nvme_io": false, 00:19:11.034 "nvme_io_md": false, 00:19:11.034 "write_zeroes": true, 00:19:11.034 "zcopy": false, 00:19:11.034 "get_zone_info": false, 00:19:11.034 "zone_management": false, 00:19:11.034 "zone_append": false, 00:19:11.034 "compare": false, 00:19:11.034 "compare_and_write": false, 00:19:11.034 "abort": false, 00:19:11.034 "seek_hole": false, 00:19:11.034 "seek_data": false, 00:19:11.034 "copy": false, 00:19:11.034 "nvme_iov_md": false 00:19:11.034 }, 00:19:11.034 "driver_specific": { 00:19:11.034 "raid": { 00:19:11.034 "uuid": "6ab216ab-da83-4ae6-8569-03cdb6422348", 00:19:11.034 "strip_size_kb": 64, 00:19:11.034 "state": "online", 00:19:11.034 "raid_level": "raid5f", 00:19:11.034 "superblock": true, 00:19:11.034 "num_base_bdevs": 4, 00:19:11.034 "num_base_bdevs_discovered": 4, 00:19:11.034 "num_base_bdevs_operational": 4, 00:19:11.034 "base_bdevs_list": [ 00:19:11.034 { 00:19:11.034 "name": "BaseBdev1", 00:19:11.034 "uuid": "1905d394-115b-4ce6-bbfe-0989284f13fd", 00:19:11.034 "is_configured": true, 00:19:11.034 "data_offset": 2048, 00:19:11.034 "data_size": 63488 00:19:11.034 }, 00:19:11.034 { 00:19:11.034 "name": "BaseBdev2", 00:19:11.034 "uuid": "ccd9d6f4-6148-43ae-8eaa-99d9207c3438", 00:19:11.034 "is_configured": true, 00:19:11.034 "data_offset": 2048, 00:19:11.034 "data_size": 63488 00:19:11.034 }, 00:19:11.034 { 00:19:11.034 "name": "BaseBdev3", 00:19:11.034 "uuid": "0aaef280-7a66-4097-b683-c04e369a2026", 00:19:11.034 "is_configured": true, 00:19:11.034 "data_offset": 2048, 00:19:11.034 "data_size": 63488 00:19:11.034 }, 00:19:11.034 { 00:19:11.034 "name": "BaseBdev4", 00:19:11.034 "uuid": "b0fb44ee-3cbe-4006-a1a5-5c765320a701", 00:19:11.034 "is_configured": true, 00:19:11.034 "data_offset": 2048, 00:19:11.034 "data_size": 63488 00:19:11.034 } 00:19:11.034 ] 00:19:11.034 } 00:19:11.034 } 00:19:11.034 }' 00:19:11.034 15:45:54 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:11.294 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:11.294 BaseBdev2 00:19:11.294 BaseBdev3 00:19:11.294 BaseBdev4' 00:19:11.294 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:11.294 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:11.294 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:11.294 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:11.294 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:11.294 15:45:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.294 15:45:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.294 15:45:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.294 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:11.294 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:11.294 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:11.294 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:11.294 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:11.294 15:45:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.294 15:45:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.294 15:45:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.294 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:11.294 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:11.294 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:11.294 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:11.294 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:11.294 15:45:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.294 15:45:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.294 15:45:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.294 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:11.294 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:11.294 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:11.294 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:11.294 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:11.294 15:45:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.294 15:45:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.294 15:45:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.294 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:11.294 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:11.294 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:11.294 15:45:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.294 15:45:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.294 [2024-12-06 15:45:54.543722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:11.609 15:45:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.609 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:11.609 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:19:11.609 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:11.609 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:19:11.609 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:11.609 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:19:11.609 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:11.609 15:45:54 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:11.609 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:11.609 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:11.609 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:11.609 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.609 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.609 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.609 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.609 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.609 15:45:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.609 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:11.609 15:45:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.609 15:45:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.609 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.609 "name": "Existed_Raid", 00:19:11.609 "uuid": "6ab216ab-da83-4ae6-8569-03cdb6422348", 00:19:11.609 "strip_size_kb": 64, 00:19:11.609 "state": "online", 00:19:11.609 "raid_level": "raid5f", 00:19:11.609 "superblock": true, 00:19:11.609 "num_base_bdevs": 4, 00:19:11.609 "num_base_bdevs_discovered": 3, 00:19:11.609 "num_base_bdevs_operational": 3, 00:19:11.609 "base_bdevs_list": [ 00:19:11.609 { 00:19:11.609 "name": 
null, 00:19:11.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.609 "is_configured": false, 00:19:11.609 "data_offset": 0, 00:19:11.609 "data_size": 63488 00:19:11.609 }, 00:19:11.609 { 00:19:11.609 "name": "BaseBdev2", 00:19:11.609 "uuid": "ccd9d6f4-6148-43ae-8eaa-99d9207c3438", 00:19:11.609 "is_configured": true, 00:19:11.609 "data_offset": 2048, 00:19:11.609 "data_size": 63488 00:19:11.609 }, 00:19:11.609 { 00:19:11.609 "name": "BaseBdev3", 00:19:11.609 "uuid": "0aaef280-7a66-4097-b683-c04e369a2026", 00:19:11.609 "is_configured": true, 00:19:11.609 "data_offset": 2048, 00:19:11.609 "data_size": 63488 00:19:11.609 }, 00:19:11.609 { 00:19:11.609 "name": "BaseBdev4", 00:19:11.609 "uuid": "b0fb44ee-3cbe-4006-a1a5-5c765320a701", 00:19:11.609 "is_configured": true, 00:19:11.609 "data_offset": 2048, 00:19:11.609 "data_size": 63488 00:19:11.609 } 00:19:11.609 ] 00:19:11.609 }' 00:19:11.609 15:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.609 15:45:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.868 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:11.868 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:11.868 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.868 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.868 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:11.868 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.868 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.868 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:19:11.868 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:11.868 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:11.868 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.868 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.868 [2024-12-06 15:45:55.116658] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:11.868 [2024-12-06 15:45:55.116856] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:12.126 [2024-12-06 15:45:55.220008] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:12.126 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.126 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:12.126 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:12.126 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.126 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:12.126 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.126 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.126 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.126 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:12.126 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:19:12.126 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:19:12.126 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.126 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.126 [2024-12-06 15:45:55.275949] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:12.126 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.126 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:12.126 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:12.126 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.126 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.126 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.126 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:12.126 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.384 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:12.384 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:12.384 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:19:12.384 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.384 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.384 [2024-12-06 
15:45:55.430782] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:12.384 [2024-12-06 15:45:55.430843] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:12.384 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.384 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:12.384 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:12.384 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.384 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:12.384 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.384 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.384 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.384 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:12.384 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:12.384 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:19:12.384 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:19:12.384 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:12.384 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:12.384 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.384 15:45:55 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.384 BaseBdev2 00:19:12.384 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.385 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:19:12.385 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:12.385 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:12.385 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:12.385 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:12.385 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:12.385 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:12.385 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.385 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.385 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.385 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:12.385 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.385 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.385 [ 00:19:12.385 { 00:19:12.385 "name": "BaseBdev2", 00:19:12.385 "aliases": [ 00:19:12.385 "a1639887-11b2-48db-acc1-fa8ca7a04baa" 00:19:12.385 ], 00:19:12.385 "product_name": "Malloc disk", 00:19:12.385 "block_size": 512, 00:19:12.385 
"num_blocks": 65536, 00:19:12.385 "uuid": "a1639887-11b2-48db-acc1-fa8ca7a04baa", 00:19:12.385 "assigned_rate_limits": { 00:19:12.385 "rw_ios_per_sec": 0, 00:19:12.385 "rw_mbytes_per_sec": 0, 00:19:12.385 "r_mbytes_per_sec": 0, 00:19:12.385 "w_mbytes_per_sec": 0 00:19:12.385 }, 00:19:12.385 "claimed": false, 00:19:12.385 "zoned": false, 00:19:12.385 "supported_io_types": { 00:19:12.385 "read": true, 00:19:12.385 "write": true, 00:19:12.385 "unmap": true, 00:19:12.385 "flush": true, 00:19:12.385 "reset": true, 00:19:12.385 "nvme_admin": false, 00:19:12.385 "nvme_io": false, 00:19:12.385 "nvme_io_md": false, 00:19:12.385 "write_zeroes": true, 00:19:12.385 "zcopy": true, 00:19:12.385 "get_zone_info": false, 00:19:12.385 "zone_management": false, 00:19:12.385 "zone_append": false, 00:19:12.385 "compare": false, 00:19:12.385 "compare_and_write": false, 00:19:12.385 "abort": true, 00:19:12.385 "seek_hole": false, 00:19:12.385 "seek_data": false, 00:19:12.385 "copy": true, 00:19:12.385 "nvme_iov_md": false 00:19:12.385 }, 00:19:12.385 "memory_domains": [ 00:19:12.385 { 00:19:12.385 "dma_device_id": "system", 00:19:12.385 "dma_device_type": 1 00:19:12.385 }, 00:19:12.385 { 00:19:12.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:12.385 "dma_device_type": 2 00:19:12.385 } 00:19:12.385 ], 00:19:12.385 "driver_specific": {} 00:19:12.385 } 00:19:12.385 ] 00:19:12.385 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.385 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:12.385 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:12.385 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:12.385 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:12.385 15:45:55 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.385 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.643 BaseBdev3 00:19:12.643 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.643 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:19:12.643 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:19:12.643 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:12.643 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:12.643 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:12.643 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:12.643 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:12.643 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.643 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.643 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.643 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:12.643 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.643 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.643 [ 00:19:12.643 { 00:19:12.643 "name": "BaseBdev3", 00:19:12.643 "aliases": [ 00:19:12.643 
"b53163a5-0b95-40c2-b87c-5b0745f2fe35" 00:19:12.643 ], 00:19:12.643 "product_name": "Malloc disk", 00:19:12.643 "block_size": 512, 00:19:12.643 "num_blocks": 65536, 00:19:12.643 "uuid": "b53163a5-0b95-40c2-b87c-5b0745f2fe35", 00:19:12.643 "assigned_rate_limits": { 00:19:12.643 "rw_ios_per_sec": 0, 00:19:12.643 "rw_mbytes_per_sec": 0, 00:19:12.643 "r_mbytes_per_sec": 0, 00:19:12.643 "w_mbytes_per_sec": 0 00:19:12.643 }, 00:19:12.643 "claimed": false, 00:19:12.643 "zoned": false, 00:19:12.643 "supported_io_types": { 00:19:12.643 "read": true, 00:19:12.643 "write": true, 00:19:12.643 "unmap": true, 00:19:12.643 "flush": true, 00:19:12.643 "reset": true, 00:19:12.643 "nvme_admin": false, 00:19:12.643 "nvme_io": false, 00:19:12.643 "nvme_io_md": false, 00:19:12.643 "write_zeroes": true, 00:19:12.643 "zcopy": true, 00:19:12.643 "get_zone_info": false, 00:19:12.643 "zone_management": false, 00:19:12.643 "zone_append": false, 00:19:12.643 "compare": false, 00:19:12.643 "compare_and_write": false, 00:19:12.643 "abort": true, 00:19:12.643 "seek_hole": false, 00:19:12.643 "seek_data": false, 00:19:12.643 "copy": true, 00:19:12.643 "nvme_iov_md": false 00:19:12.643 }, 00:19:12.643 "memory_domains": [ 00:19:12.643 { 00:19:12.643 "dma_device_id": "system", 00:19:12.643 "dma_device_type": 1 00:19:12.643 }, 00:19:12.643 { 00:19:12.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:12.643 "dma_device_type": 2 00:19:12.643 } 00:19:12.643 ], 00:19:12.643 "driver_specific": {} 00:19:12.643 } 00:19:12.643 ] 00:19:12.643 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:12.644 15:45:55 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.644 BaseBdev4 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:19:12.644 [ 00:19:12.644 { 00:19:12.644 "name": "BaseBdev4", 00:19:12.644 "aliases": [ 00:19:12.644 "b012f9d9-16fe-47cc-bce8-a4f648ffe8de" 00:19:12.644 ], 00:19:12.644 "product_name": "Malloc disk", 00:19:12.644 "block_size": 512, 00:19:12.644 "num_blocks": 65536, 00:19:12.644 "uuid": "b012f9d9-16fe-47cc-bce8-a4f648ffe8de", 00:19:12.644 "assigned_rate_limits": { 00:19:12.644 "rw_ios_per_sec": 0, 00:19:12.644 "rw_mbytes_per_sec": 0, 00:19:12.644 "r_mbytes_per_sec": 0, 00:19:12.644 "w_mbytes_per_sec": 0 00:19:12.644 }, 00:19:12.644 "claimed": false, 00:19:12.644 "zoned": false, 00:19:12.644 "supported_io_types": { 00:19:12.644 "read": true, 00:19:12.644 "write": true, 00:19:12.644 "unmap": true, 00:19:12.644 "flush": true, 00:19:12.644 "reset": true, 00:19:12.644 "nvme_admin": false, 00:19:12.644 "nvme_io": false, 00:19:12.644 "nvme_io_md": false, 00:19:12.644 "write_zeroes": true, 00:19:12.644 "zcopy": true, 00:19:12.644 "get_zone_info": false, 00:19:12.644 "zone_management": false, 00:19:12.644 "zone_append": false, 00:19:12.644 "compare": false, 00:19:12.644 "compare_and_write": false, 00:19:12.644 "abort": true, 00:19:12.644 "seek_hole": false, 00:19:12.644 "seek_data": false, 00:19:12.644 "copy": true, 00:19:12.644 "nvme_iov_md": false 00:19:12.644 }, 00:19:12.644 "memory_domains": [ 00:19:12.644 { 00:19:12.644 "dma_device_id": "system", 00:19:12.644 "dma_device_type": 1 00:19:12.644 }, 00:19:12.644 { 00:19:12.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:12.644 "dma_device_type": 2 00:19:12.644 } 00:19:12.644 ], 00:19:12.644 "driver_specific": {} 00:19:12.644 } 00:19:12.644 ] 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:12.644 15:45:55 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.644 [2024-12-06 15:45:55.865660] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:12.644 [2024-12-06 15:45:55.865713] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:12.644 [2024-12-06 15:45:55.865754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:12.644 [2024-12-06 15:45:55.868112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:12.644 [2024-12-06 15:45:55.868327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.644 "name": "Existed_Raid", 00:19:12.644 "uuid": "6f9f2236-77b7-4ff9-9e1d-4ae3c7fe89ba", 00:19:12.644 "strip_size_kb": 64, 00:19:12.644 "state": "configuring", 00:19:12.644 "raid_level": "raid5f", 00:19:12.644 "superblock": true, 00:19:12.644 "num_base_bdevs": 4, 00:19:12.644 "num_base_bdevs_discovered": 3, 00:19:12.644 "num_base_bdevs_operational": 4, 00:19:12.644 "base_bdevs_list": [ 00:19:12.644 { 00:19:12.644 "name": "BaseBdev1", 00:19:12.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.644 "is_configured": false, 00:19:12.644 "data_offset": 0, 00:19:12.644 "data_size": 0 00:19:12.644 }, 00:19:12.644 { 00:19:12.644 "name": "BaseBdev2", 00:19:12.644 "uuid": "a1639887-11b2-48db-acc1-fa8ca7a04baa", 00:19:12.644 "is_configured": true, 00:19:12.644 "data_offset": 2048, 00:19:12.644 
"data_size": 63488 00:19:12.644 }, 00:19:12.644 { 00:19:12.644 "name": "BaseBdev3", 00:19:12.644 "uuid": "b53163a5-0b95-40c2-b87c-5b0745f2fe35", 00:19:12.644 "is_configured": true, 00:19:12.644 "data_offset": 2048, 00:19:12.644 "data_size": 63488 00:19:12.644 }, 00:19:12.644 { 00:19:12.644 "name": "BaseBdev4", 00:19:12.644 "uuid": "b012f9d9-16fe-47cc-bce8-a4f648ffe8de", 00:19:12.644 "is_configured": true, 00:19:12.644 "data_offset": 2048, 00:19:12.644 "data_size": 63488 00:19:12.644 } 00:19:12.644 ] 00:19:12.644 }' 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:12.644 15:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.211 15:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:13.212 15:45:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.212 15:45:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.212 [2024-12-06 15:45:56.273048] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:13.212 15:45:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.212 15:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:13.212 15:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:13.212 15:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:13.212 15:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:13.212 15:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:13.212 15:45:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:13.212 15:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.212 15:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:13.212 15:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.212 15:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.212 15:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.212 15:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.212 15:45:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.212 15:45:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.212 15:45:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.212 15:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.212 "name": "Existed_Raid", 00:19:13.212 "uuid": "6f9f2236-77b7-4ff9-9e1d-4ae3c7fe89ba", 00:19:13.212 "strip_size_kb": 64, 00:19:13.212 "state": "configuring", 00:19:13.212 "raid_level": "raid5f", 00:19:13.212 "superblock": true, 00:19:13.212 "num_base_bdevs": 4, 00:19:13.212 "num_base_bdevs_discovered": 2, 00:19:13.212 "num_base_bdevs_operational": 4, 00:19:13.212 "base_bdevs_list": [ 00:19:13.212 { 00:19:13.212 "name": "BaseBdev1", 00:19:13.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.212 "is_configured": false, 00:19:13.212 "data_offset": 0, 00:19:13.212 "data_size": 0 00:19:13.212 }, 00:19:13.212 { 00:19:13.212 "name": null, 00:19:13.212 "uuid": "a1639887-11b2-48db-acc1-fa8ca7a04baa", 00:19:13.212 
"is_configured": false, 00:19:13.212 "data_offset": 0, 00:19:13.212 "data_size": 63488 00:19:13.212 }, 00:19:13.212 { 00:19:13.212 "name": "BaseBdev3", 00:19:13.212 "uuid": "b53163a5-0b95-40c2-b87c-5b0745f2fe35", 00:19:13.212 "is_configured": true, 00:19:13.212 "data_offset": 2048, 00:19:13.212 "data_size": 63488 00:19:13.212 }, 00:19:13.212 { 00:19:13.212 "name": "BaseBdev4", 00:19:13.212 "uuid": "b012f9d9-16fe-47cc-bce8-a4f648ffe8de", 00:19:13.212 "is_configured": true, 00:19:13.212 "data_offset": 2048, 00:19:13.212 "data_size": 63488 00:19:13.212 } 00:19:13.212 ] 00:19:13.212 }' 00:19:13.212 15:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.212 15:45:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.470 15:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:13.470 15:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.470 15:45:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.470 15:45:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.470 15:45:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.471 15:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:19:13.471 15:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:13.471 15:45:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.471 15:45:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.729 [2024-12-06 15:45:56.776153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:19:13.729 BaseBdev1 00:19:13.729 15:45:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.729 15:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:19:13.729 15:45:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:13.729 15:45:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:13.729 15:45:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:13.729 15:45:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:13.729 15:45:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:13.729 15:45:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:13.729 15:45:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.729 15:45:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.729 15:45:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.729 15:45:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:13.729 15:45:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.729 15:45:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.729 [ 00:19:13.729 { 00:19:13.729 "name": "BaseBdev1", 00:19:13.729 "aliases": [ 00:19:13.729 "71415499-25e6-451c-b8cf-2a86bddcbc8f" 00:19:13.729 ], 00:19:13.729 "product_name": "Malloc disk", 00:19:13.729 "block_size": 512, 00:19:13.729 "num_blocks": 65536, 00:19:13.729 "uuid": "71415499-25e6-451c-b8cf-2a86bddcbc8f", 
00:19:13.729 "assigned_rate_limits": { 00:19:13.729 "rw_ios_per_sec": 0, 00:19:13.729 "rw_mbytes_per_sec": 0, 00:19:13.729 "r_mbytes_per_sec": 0, 00:19:13.729 "w_mbytes_per_sec": 0 00:19:13.729 }, 00:19:13.729 "claimed": true, 00:19:13.729 "claim_type": "exclusive_write", 00:19:13.729 "zoned": false, 00:19:13.729 "supported_io_types": { 00:19:13.729 "read": true, 00:19:13.729 "write": true, 00:19:13.729 "unmap": true, 00:19:13.729 "flush": true, 00:19:13.729 "reset": true, 00:19:13.729 "nvme_admin": false, 00:19:13.729 "nvme_io": false, 00:19:13.729 "nvme_io_md": false, 00:19:13.729 "write_zeroes": true, 00:19:13.729 "zcopy": true, 00:19:13.729 "get_zone_info": false, 00:19:13.729 "zone_management": false, 00:19:13.729 "zone_append": false, 00:19:13.729 "compare": false, 00:19:13.729 "compare_and_write": false, 00:19:13.729 "abort": true, 00:19:13.729 "seek_hole": false, 00:19:13.729 "seek_data": false, 00:19:13.729 "copy": true, 00:19:13.729 "nvme_iov_md": false 00:19:13.729 }, 00:19:13.729 "memory_domains": [ 00:19:13.729 { 00:19:13.729 "dma_device_id": "system", 00:19:13.729 "dma_device_type": 1 00:19:13.729 }, 00:19:13.729 { 00:19:13.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.729 "dma_device_type": 2 00:19:13.729 } 00:19:13.729 ], 00:19:13.729 "driver_specific": {} 00:19:13.729 } 00:19:13.729 ] 00:19:13.730 15:45:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.730 15:45:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:13.730 15:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:13.730 15:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:13.730 15:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:13.730 15:45:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:13.730 15:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:13.730 15:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:13.730 15:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.730 15:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:13.730 15:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.730 15:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.730 15:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.730 15:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.730 15:45:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.730 15:45:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.730 15:45:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.730 15:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.730 "name": "Existed_Raid", 00:19:13.730 "uuid": "6f9f2236-77b7-4ff9-9e1d-4ae3c7fe89ba", 00:19:13.730 "strip_size_kb": 64, 00:19:13.730 "state": "configuring", 00:19:13.730 "raid_level": "raid5f", 00:19:13.730 "superblock": true, 00:19:13.730 "num_base_bdevs": 4, 00:19:13.730 "num_base_bdevs_discovered": 3, 00:19:13.730 "num_base_bdevs_operational": 4, 00:19:13.730 "base_bdevs_list": [ 00:19:13.730 { 00:19:13.730 "name": "BaseBdev1", 00:19:13.730 "uuid": "71415499-25e6-451c-b8cf-2a86bddcbc8f", 
00:19:13.730 "is_configured": true, 00:19:13.730 "data_offset": 2048, 00:19:13.730 "data_size": 63488 00:19:13.730 }, 00:19:13.730 { 00:19:13.730 "name": null, 00:19:13.730 "uuid": "a1639887-11b2-48db-acc1-fa8ca7a04baa", 00:19:13.730 "is_configured": false, 00:19:13.730 "data_offset": 0, 00:19:13.730 "data_size": 63488 00:19:13.730 }, 00:19:13.730 { 00:19:13.730 "name": "BaseBdev3", 00:19:13.730 "uuid": "b53163a5-0b95-40c2-b87c-5b0745f2fe35", 00:19:13.730 "is_configured": true, 00:19:13.730 "data_offset": 2048, 00:19:13.730 "data_size": 63488 00:19:13.730 }, 00:19:13.730 { 00:19:13.730 "name": "BaseBdev4", 00:19:13.730 "uuid": "b012f9d9-16fe-47cc-bce8-a4f648ffe8de", 00:19:13.730 "is_configured": true, 00:19:13.730 "data_offset": 2048, 00:19:13.730 "data_size": 63488 00:19:13.730 } 00:19:13.730 ] 00:19:13.730 }' 00:19:13.730 15:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.730 15:45:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.989 15:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:13.989 15:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.989 15:45:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.989 15:45:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.989 15:45:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.989 15:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:19:13.989 15:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:19:13.989 15:45:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:13.989 15:45:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.989 [2024-12-06 15:45:57.263629] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:13.989 15:45:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.989 15:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:13.989 15:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:13.989 15:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:13.989 15:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:13.989 15:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:13.989 15:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:13.989 15:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.989 15:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:13.989 15:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.989 15:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.989 15:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.989 15:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.989 15:45:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.989 15:45:57 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:14.253 15:45:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.253 15:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:14.253 "name": "Existed_Raid", 00:19:14.253 "uuid": "6f9f2236-77b7-4ff9-9e1d-4ae3c7fe89ba", 00:19:14.253 "strip_size_kb": 64, 00:19:14.253 "state": "configuring", 00:19:14.253 "raid_level": "raid5f", 00:19:14.253 "superblock": true, 00:19:14.253 "num_base_bdevs": 4, 00:19:14.253 "num_base_bdevs_discovered": 2, 00:19:14.253 "num_base_bdevs_operational": 4, 00:19:14.253 "base_bdevs_list": [ 00:19:14.253 { 00:19:14.253 "name": "BaseBdev1", 00:19:14.253 "uuid": "71415499-25e6-451c-b8cf-2a86bddcbc8f", 00:19:14.253 "is_configured": true, 00:19:14.253 "data_offset": 2048, 00:19:14.253 "data_size": 63488 00:19:14.253 }, 00:19:14.253 { 00:19:14.253 "name": null, 00:19:14.253 "uuid": "a1639887-11b2-48db-acc1-fa8ca7a04baa", 00:19:14.253 "is_configured": false, 00:19:14.253 "data_offset": 0, 00:19:14.253 "data_size": 63488 00:19:14.253 }, 00:19:14.253 { 00:19:14.253 "name": null, 00:19:14.253 "uuid": "b53163a5-0b95-40c2-b87c-5b0745f2fe35", 00:19:14.253 "is_configured": false, 00:19:14.253 "data_offset": 0, 00:19:14.253 "data_size": 63488 00:19:14.253 }, 00:19:14.253 { 00:19:14.253 "name": "BaseBdev4", 00:19:14.253 "uuid": "b012f9d9-16fe-47cc-bce8-a4f648ffe8de", 00:19:14.253 "is_configured": true, 00:19:14.253 "data_offset": 2048, 00:19:14.253 "data_size": 63488 00:19:14.253 } 00:19:14.253 ] 00:19:14.253 }' 00:19:14.253 15:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:14.253 15:45:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.512 15:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.512 15:45:57 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:14.512 15:45:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.512 15:45:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.512 15:45:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.512 15:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:19:14.512 15:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:14.512 15:45:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.512 15:45:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.512 [2024-12-06 15:45:57.702970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:14.512 15:45:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.512 15:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:14.512 15:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:14.512 15:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:14.512 15:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:14.512 15:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:14.512 15:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:14.512 15:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:19:14.512 15:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:14.512 15:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:14.512 15:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:14.512 15:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.512 15:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:14.512 15:45:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.512 15:45:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.512 15:45:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.512 15:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:14.512 "name": "Existed_Raid", 00:19:14.512 "uuid": "6f9f2236-77b7-4ff9-9e1d-4ae3c7fe89ba", 00:19:14.512 "strip_size_kb": 64, 00:19:14.512 "state": "configuring", 00:19:14.512 "raid_level": "raid5f", 00:19:14.512 "superblock": true, 00:19:14.512 "num_base_bdevs": 4, 00:19:14.512 "num_base_bdevs_discovered": 3, 00:19:14.512 "num_base_bdevs_operational": 4, 00:19:14.512 "base_bdevs_list": [ 00:19:14.512 { 00:19:14.512 "name": "BaseBdev1", 00:19:14.512 "uuid": "71415499-25e6-451c-b8cf-2a86bddcbc8f", 00:19:14.512 "is_configured": true, 00:19:14.512 "data_offset": 2048, 00:19:14.512 "data_size": 63488 00:19:14.512 }, 00:19:14.512 { 00:19:14.512 "name": null, 00:19:14.512 "uuid": "a1639887-11b2-48db-acc1-fa8ca7a04baa", 00:19:14.512 "is_configured": false, 00:19:14.512 "data_offset": 0, 00:19:14.512 "data_size": 63488 00:19:14.512 }, 00:19:14.512 { 00:19:14.512 "name": "BaseBdev3", 00:19:14.512 "uuid": "b53163a5-0b95-40c2-b87c-5b0745f2fe35", 
00:19:14.512 "is_configured": true, 00:19:14.512 "data_offset": 2048, 00:19:14.512 "data_size": 63488 00:19:14.512 }, 00:19:14.512 { 00:19:14.512 "name": "BaseBdev4", 00:19:14.512 "uuid": "b012f9d9-16fe-47cc-bce8-a4f648ffe8de", 00:19:14.512 "is_configured": true, 00:19:14.512 "data_offset": 2048, 00:19:14.512 "data_size": 63488 00:19:14.512 } 00:19:14.512 ] 00:19:14.512 }' 00:19:14.512 15:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:14.512 15:45:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.081 15:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.081 15:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:15.081 15:45:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.081 15:45:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.081 15:45:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.081 15:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:19:15.081 15:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:15.081 15:45:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.081 15:45:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.081 [2024-12-06 15:45:58.190704] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:15.081 15:45:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.081 15:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:19:15.081 15:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:15.081 15:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:15.081 15:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:15.081 15:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:15.081 15:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:15.081 15:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.081 15:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.081 15:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:15.081 15:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.081 15:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.081 15:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:15.081 15:45:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.081 15:45:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.081 15:45:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.081 15:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.081 "name": "Existed_Raid", 00:19:15.081 "uuid": "6f9f2236-77b7-4ff9-9e1d-4ae3c7fe89ba", 00:19:15.081 "strip_size_kb": 64, 00:19:15.081 "state": "configuring", 00:19:15.081 "raid_level": "raid5f", 
00:19:15.081 "superblock": true, 00:19:15.081 "num_base_bdevs": 4, 00:19:15.081 "num_base_bdevs_discovered": 2, 00:19:15.081 "num_base_bdevs_operational": 4, 00:19:15.081 "base_bdevs_list": [ 00:19:15.081 { 00:19:15.081 "name": null, 00:19:15.081 "uuid": "71415499-25e6-451c-b8cf-2a86bddcbc8f", 00:19:15.081 "is_configured": false, 00:19:15.081 "data_offset": 0, 00:19:15.081 "data_size": 63488 00:19:15.081 }, 00:19:15.081 { 00:19:15.081 "name": null, 00:19:15.081 "uuid": "a1639887-11b2-48db-acc1-fa8ca7a04baa", 00:19:15.081 "is_configured": false, 00:19:15.081 "data_offset": 0, 00:19:15.081 "data_size": 63488 00:19:15.081 }, 00:19:15.081 { 00:19:15.081 "name": "BaseBdev3", 00:19:15.081 "uuid": "b53163a5-0b95-40c2-b87c-5b0745f2fe35", 00:19:15.081 "is_configured": true, 00:19:15.081 "data_offset": 2048, 00:19:15.081 "data_size": 63488 00:19:15.081 }, 00:19:15.081 { 00:19:15.081 "name": "BaseBdev4", 00:19:15.081 "uuid": "b012f9d9-16fe-47cc-bce8-a4f648ffe8de", 00:19:15.081 "is_configured": true, 00:19:15.081 "data_offset": 2048, 00:19:15.081 "data_size": 63488 00:19:15.081 } 00:19:15.081 ] 00:19:15.081 }' 00:19:15.081 15:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.081 15:45:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.652 15:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.652 15:45:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.652 15:45:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.652 15:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:15.652 15:45:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.652 15:45:58 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:19:15.652 15:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:15.652 15:45:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.652 15:45:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.652 [2024-12-06 15:45:58.791441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:15.652 15:45:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.652 15:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:15.652 15:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:15.652 15:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:15.652 15:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:15.652 15:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:15.652 15:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:15.652 15:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.652 15:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.652 15:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:15.652 15:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.652 15:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:19:15.652 15:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:15.652 15:45:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.652 15:45:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.652 15:45:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.652 15:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.652 "name": "Existed_Raid", 00:19:15.652 "uuid": "6f9f2236-77b7-4ff9-9e1d-4ae3c7fe89ba", 00:19:15.652 "strip_size_kb": 64, 00:19:15.652 "state": "configuring", 00:19:15.652 "raid_level": "raid5f", 00:19:15.652 "superblock": true, 00:19:15.652 "num_base_bdevs": 4, 00:19:15.652 "num_base_bdevs_discovered": 3, 00:19:15.652 "num_base_bdevs_operational": 4, 00:19:15.652 "base_bdevs_list": [ 00:19:15.652 { 00:19:15.652 "name": null, 00:19:15.652 "uuid": "71415499-25e6-451c-b8cf-2a86bddcbc8f", 00:19:15.652 "is_configured": false, 00:19:15.652 "data_offset": 0, 00:19:15.652 "data_size": 63488 00:19:15.652 }, 00:19:15.652 { 00:19:15.652 "name": "BaseBdev2", 00:19:15.652 "uuid": "a1639887-11b2-48db-acc1-fa8ca7a04baa", 00:19:15.652 "is_configured": true, 00:19:15.652 "data_offset": 2048, 00:19:15.652 "data_size": 63488 00:19:15.652 }, 00:19:15.652 { 00:19:15.652 "name": "BaseBdev3", 00:19:15.652 "uuid": "b53163a5-0b95-40c2-b87c-5b0745f2fe35", 00:19:15.652 "is_configured": true, 00:19:15.652 "data_offset": 2048, 00:19:15.652 "data_size": 63488 00:19:15.652 }, 00:19:15.652 { 00:19:15.652 "name": "BaseBdev4", 00:19:15.652 "uuid": "b012f9d9-16fe-47cc-bce8-a4f648ffe8de", 00:19:15.652 "is_configured": true, 00:19:15.652 "data_offset": 2048, 00:19:15.652 "data_size": 63488 00:19:15.652 } 00:19:15.652 ] 00:19:15.652 }' 00:19:15.652 15:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:19:15.652 15:45:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.223 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:16.223 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.223 15:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.223 15:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.223 15:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.223 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:16.223 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.223 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:16.223 15:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.223 15:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.223 15:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.223 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 71415499-25e6-451c-b8cf-2a86bddcbc8f 00:19:16.223 15:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.223 15:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.223 [2024-12-06 15:45:59.346384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:16.223 [2024-12-06 15:45:59.346685] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:16.223 [2024-12-06 15:45:59.346701] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:16.223 [2024-12-06 15:45:59.347004] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:16.223 NewBaseBdev 00:19:16.223 15:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.223 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:19:16.223 15:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:19:16.223 15:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:16.223 15:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:16.223 15:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:16.223 15:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:16.223 15:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:16.223 15:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.223 15:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.223 [2024-12-06 15:45:59.354269] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:16.223 [2024-12-06 15:45:59.354297] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:19:16.223 [2024-12-06 15:45:59.354596] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:16.223 15:45:59 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.223 15:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:16.223 15:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.223 15:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.223 [ 00:19:16.223 { 00:19:16.223 "name": "NewBaseBdev", 00:19:16.224 "aliases": [ 00:19:16.224 "71415499-25e6-451c-b8cf-2a86bddcbc8f" 00:19:16.224 ], 00:19:16.224 "product_name": "Malloc disk", 00:19:16.224 "block_size": 512, 00:19:16.224 "num_blocks": 65536, 00:19:16.224 "uuid": "71415499-25e6-451c-b8cf-2a86bddcbc8f", 00:19:16.224 "assigned_rate_limits": { 00:19:16.224 "rw_ios_per_sec": 0, 00:19:16.224 "rw_mbytes_per_sec": 0, 00:19:16.224 "r_mbytes_per_sec": 0, 00:19:16.224 "w_mbytes_per_sec": 0 00:19:16.224 }, 00:19:16.224 "claimed": true, 00:19:16.224 "claim_type": "exclusive_write", 00:19:16.224 "zoned": false, 00:19:16.224 "supported_io_types": { 00:19:16.224 "read": true, 00:19:16.224 "write": true, 00:19:16.224 "unmap": true, 00:19:16.224 "flush": true, 00:19:16.224 "reset": true, 00:19:16.224 "nvme_admin": false, 00:19:16.224 "nvme_io": false, 00:19:16.224 "nvme_io_md": false, 00:19:16.224 "write_zeroes": true, 00:19:16.224 "zcopy": true, 00:19:16.224 "get_zone_info": false, 00:19:16.224 "zone_management": false, 00:19:16.224 "zone_append": false, 00:19:16.224 "compare": false, 00:19:16.224 "compare_and_write": false, 00:19:16.224 "abort": true, 00:19:16.224 "seek_hole": false, 00:19:16.224 "seek_data": false, 00:19:16.224 "copy": true, 00:19:16.224 "nvme_iov_md": false 00:19:16.224 }, 00:19:16.224 "memory_domains": [ 00:19:16.224 { 00:19:16.224 "dma_device_id": "system", 00:19:16.224 "dma_device_type": 1 00:19:16.224 }, 00:19:16.224 { 00:19:16.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:16.224 "dma_device_type": 2 00:19:16.224 } 
00:19:16.224 ], 00:19:16.224 "driver_specific": {} 00:19:16.224 } 00:19:16.224 ] 00:19:16.224 15:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.224 15:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:16.224 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:19:16.224 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:16.224 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:16.224 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:16.224 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:16.224 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:16.224 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:16.224 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:16.224 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:16.224 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:16.224 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.224 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:16.224 15:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.224 15:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.224 
15:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.224 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:16.224 "name": "Existed_Raid", 00:19:16.224 "uuid": "6f9f2236-77b7-4ff9-9e1d-4ae3c7fe89ba", 00:19:16.224 "strip_size_kb": 64, 00:19:16.224 "state": "online", 00:19:16.224 "raid_level": "raid5f", 00:19:16.224 "superblock": true, 00:19:16.224 "num_base_bdevs": 4, 00:19:16.224 "num_base_bdevs_discovered": 4, 00:19:16.224 "num_base_bdevs_operational": 4, 00:19:16.224 "base_bdevs_list": [ 00:19:16.224 { 00:19:16.224 "name": "NewBaseBdev", 00:19:16.224 "uuid": "71415499-25e6-451c-b8cf-2a86bddcbc8f", 00:19:16.224 "is_configured": true, 00:19:16.224 "data_offset": 2048, 00:19:16.224 "data_size": 63488 00:19:16.224 }, 00:19:16.224 { 00:19:16.224 "name": "BaseBdev2", 00:19:16.224 "uuid": "a1639887-11b2-48db-acc1-fa8ca7a04baa", 00:19:16.224 "is_configured": true, 00:19:16.224 "data_offset": 2048, 00:19:16.224 "data_size": 63488 00:19:16.224 }, 00:19:16.224 { 00:19:16.224 "name": "BaseBdev3", 00:19:16.224 "uuid": "b53163a5-0b95-40c2-b87c-5b0745f2fe35", 00:19:16.224 "is_configured": true, 00:19:16.224 "data_offset": 2048, 00:19:16.224 "data_size": 63488 00:19:16.224 }, 00:19:16.224 { 00:19:16.224 "name": "BaseBdev4", 00:19:16.224 "uuid": "b012f9d9-16fe-47cc-bce8-a4f648ffe8de", 00:19:16.224 "is_configured": true, 00:19:16.224 "data_offset": 2048, 00:19:16.224 "data_size": 63488 00:19:16.224 } 00:19:16.224 ] 00:19:16.224 }' 00:19:16.224 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:16.224 15:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.793 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:19:16.793 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:19:16.793 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:16.793 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:16.793 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:16.793 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:16.793 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:16.793 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:16.793 15:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.793 15:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.793 [2024-12-06 15:45:59.799650] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:16.793 15:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.793 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:16.793 "name": "Existed_Raid", 00:19:16.793 "aliases": [ 00:19:16.793 "6f9f2236-77b7-4ff9-9e1d-4ae3c7fe89ba" 00:19:16.793 ], 00:19:16.793 "product_name": "Raid Volume", 00:19:16.793 "block_size": 512, 00:19:16.793 "num_blocks": 190464, 00:19:16.793 "uuid": "6f9f2236-77b7-4ff9-9e1d-4ae3c7fe89ba", 00:19:16.793 "assigned_rate_limits": { 00:19:16.793 "rw_ios_per_sec": 0, 00:19:16.793 "rw_mbytes_per_sec": 0, 00:19:16.793 "r_mbytes_per_sec": 0, 00:19:16.793 "w_mbytes_per_sec": 0 00:19:16.793 }, 00:19:16.793 "claimed": false, 00:19:16.793 "zoned": false, 00:19:16.793 "supported_io_types": { 00:19:16.793 "read": true, 00:19:16.793 "write": true, 00:19:16.793 "unmap": false, 00:19:16.793 "flush": false, 
00:19:16.793 "reset": true, 00:19:16.793 "nvme_admin": false, 00:19:16.793 "nvme_io": false, 00:19:16.793 "nvme_io_md": false, 00:19:16.793 "write_zeroes": true, 00:19:16.793 "zcopy": false, 00:19:16.793 "get_zone_info": false, 00:19:16.793 "zone_management": false, 00:19:16.793 "zone_append": false, 00:19:16.793 "compare": false, 00:19:16.793 "compare_and_write": false, 00:19:16.793 "abort": false, 00:19:16.793 "seek_hole": false, 00:19:16.793 "seek_data": false, 00:19:16.793 "copy": false, 00:19:16.793 "nvme_iov_md": false 00:19:16.793 }, 00:19:16.793 "driver_specific": { 00:19:16.793 "raid": { 00:19:16.793 "uuid": "6f9f2236-77b7-4ff9-9e1d-4ae3c7fe89ba", 00:19:16.793 "strip_size_kb": 64, 00:19:16.793 "state": "online", 00:19:16.793 "raid_level": "raid5f", 00:19:16.793 "superblock": true, 00:19:16.793 "num_base_bdevs": 4, 00:19:16.793 "num_base_bdevs_discovered": 4, 00:19:16.793 "num_base_bdevs_operational": 4, 00:19:16.793 "base_bdevs_list": [ 00:19:16.793 { 00:19:16.793 "name": "NewBaseBdev", 00:19:16.793 "uuid": "71415499-25e6-451c-b8cf-2a86bddcbc8f", 00:19:16.793 "is_configured": true, 00:19:16.793 "data_offset": 2048, 00:19:16.793 "data_size": 63488 00:19:16.793 }, 00:19:16.793 { 00:19:16.793 "name": "BaseBdev2", 00:19:16.793 "uuid": "a1639887-11b2-48db-acc1-fa8ca7a04baa", 00:19:16.793 "is_configured": true, 00:19:16.793 "data_offset": 2048, 00:19:16.793 "data_size": 63488 00:19:16.793 }, 00:19:16.793 { 00:19:16.793 "name": "BaseBdev3", 00:19:16.793 "uuid": "b53163a5-0b95-40c2-b87c-5b0745f2fe35", 00:19:16.793 "is_configured": true, 00:19:16.793 "data_offset": 2048, 00:19:16.793 "data_size": 63488 00:19:16.793 }, 00:19:16.793 { 00:19:16.793 "name": "BaseBdev4", 00:19:16.793 "uuid": "b012f9d9-16fe-47cc-bce8-a4f648ffe8de", 00:19:16.793 "is_configured": true, 00:19:16.793 "data_offset": 2048, 00:19:16.793 "data_size": 63488 00:19:16.793 } 00:19:16.793 ] 00:19:16.793 } 00:19:16.793 } 00:19:16.793 }' 00:19:16.793 15:45:59 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:16.793 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:19:16.793 BaseBdev2 00:19:16.793 BaseBdev3 00:19:16.793 BaseBdev4' 00:19:16.793 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:16.793 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:16.793 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:16.793 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:19:16.793 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:16.793 15:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.793 15:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.793 15:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.793 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:16.793 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:16.793 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:16.793 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:16.793 15:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.793 15:45:59 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:16.793 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:16.793 15:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.793 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:16.793 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:16.793 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:16.793 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:16.793 15:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:16.793 15:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.793 15:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.793 15:46:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.793 15:46:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:16.793 15:46:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:16.793 15:46:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:16.793 15:46:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:16.793 15:46:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.793 15:46:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:19:16.793 15:46:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:16.793 15:46:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.793 15:46:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:16.793 15:46:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:16.793 15:46:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:16.793 15:46:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.793 15:46:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.053 [2024-12-06 15:46:00.082980] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:17.053 [2024-12-06 15:46:00.083014] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:17.053 [2024-12-06 15:46:00.083110] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:17.053 [2024-12-06 15:46:00.083454] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:17.053 [2024-12-06 15:46:00.083467] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:19:17.053 15:46:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.053 15:46:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83455 00:19:17.053 15:46:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83455 ']' 00:19:17.053 15:46:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 
83455 00:19:17.053 15:46:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:19:17.053 15:46:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:17.053 15:46:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83455 00:19:17.053 15:46:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:17.053 15:46:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:17.053 15:46:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83455' 00:19:17.053 killing process with pid 83455 00:19:17.053 15:46:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83455 00:19:17.053 [2024-12-06 15:46:00.130801] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:17.053 15:46:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83455 00:19:17.312 [2024-12-06 15:46:00.569390] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:18.689 15:46:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:19:18.689 00:19:18.689 real 0m11.387s 00:19:18.689 user 0m17.686s 00:19:18.689 sys 0m2.502s 00:19:18.689 15:46:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:18.689 15:46:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.689 ************************************ 00:19:18.689 END TEST raid5f_state_function_test_sb 00:19:18.689 ************************************ 00:19:18.689 15:46:01 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:19:18.689 15:46:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 
-le 1 ']' 00:19:18.689 15:46:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:18.689 15:46:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:18.689 ************************************ 00:19:18.689 START TEST raid5f_superblock_test 00:19:18.689 ************************************ 00:19:18.689 15:46:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:19:18.689 15:46:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:19:18.689 15:46:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:19:18.689 15:46:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:18.689 15:46:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:18.689 15:46:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:18.689 15:46:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:18.689 15:46:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:18.689 15:46:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:18.689 15:46:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:18.689 15:46:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:18.689 15:46:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:18.689 15:46:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:18.689 15:46:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:18.689 15:46:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:19:18.689 15:46:01 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:19:18.689 15:46:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:19:18.689 15:46:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84120 00:19:18.689 15:46:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:18.689 15:46:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84120 00:19:18.689 15:46:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84120 ']' 00:19:18.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.689 15:46:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.689 15:46:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:18.689 15:46:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.689 15:46:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:18.689 15:46:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.947 [2024-12-06 15:46:02.004602] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:19:18.947 [2024-12-06 15:46:02.005853] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84120 ] 00:19:18.947 [2024-12-06 15:46:02.205733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.205 [2024-12-06 15:46:02.351622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.464 [2024-12-06 15:46:02.597138] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:19.464 [2024-12-06 15:46:02.597180] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.723 malloc1 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.723 [2024-12-06 15:46:02.897327] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:19.723 [2024-12-06 15:46:02.897539] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:19.723 [2024-12-06 15:46:02.897607] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:19.723 [2024-12-06 15:46:02.897696] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:19.723 [2024-12-06 15:46:02.900454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:19.723 [2024-12-06 15:46:02.900613] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:19.723 pt1 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.723 malloc2 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.723 [2024-12-06 15:46:02.960267] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:19.723 [2024-12-06 15:46:02.960454] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:19.723 [2024-12-06 15:46:02.960537] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:19.723 [2024-12-06 15:46:02.960627] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:19.723 [2024-12-06 15:46:02.963481] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:19.723 [2024-12-06 15:46:02.963644] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:19.723 pt2 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.723 15:46:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.988 malloc3 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.988 [2024-12-06 15:46:03.035129] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:19.988 [2024-12-06 15:46:03.035295] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:19.988 [2024-12-06 15:46:03.035357] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:19.988 [2024-12-06 15:46:03.035472] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:19.988 [2024-12-06 15:46:03.038225] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:19.988 [2024-12-06 15:46:03.038364] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:19.988 pt3 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.988 15:46:03 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.988 malloc4 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.988 [2024-12-06 15:46:03.097083] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:19.988 [2024-12-06 15:46:03.097146] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:19.988 [2024-12-06 15:46:03.097174] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:19.988 [2024-12-06 15:46:03.097186] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:19.988 [2024-12-06 15:46:03.099892] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:19.988 [2024-12-06 15:46:03.100030] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:19.988 pt4 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:19.988 [2024-12-06 15:46:03.109107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:19.988 [2024-12-06 15:46:03.111450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:19.988 [2024-12-06 15:46:03.111674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:19.988 [2024-12-06 15:46:03.111731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:19.988 [2024-12-06 15:46:03.111942] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:19.988 [2024-12-06 15:46:03.111960] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:19.988 [2024-12-06 15:46:03.112237] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:19.988 [2024-12-06 15:46:03.119865] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:19.988 [2024-12-06 15:46:03.119987] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:19.988 [2024-12-06 15:46:03.120281] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:19.988 
15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.988 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.988 "name": "raid_bdev1", 00:19:19.988 "uuid": "78c3dcd4-6a86-4808-aa80-4345f48856b8", 00:19:19.988 "strip_size_kb": 64, 00:19:19.988 "state": "online", 00:19:19.988 "raid_level": "raid5f", 00:19:19.988 "superblock": true, 00:19:19.988 "num_base_bdevs": 4, 00:19:19.988 "num_base_bdevs_discovered": 4, 00:19:19.988 "num_base_bdevs_operational": 4, 00:19:19.988 "base_bdevs_list": [ 00:19:19.988 { 00:19:19.988 "name": "pt1", 00:19:19.988 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:19.988 "is_configured": true, 00:19:19.988 "data_offset": 2048, 00:19:19.988 "data_size": 63488 00:19:19.988 }, 00:19:19.988 { 00:19:19.988 "name": "pt2", 00:19:19.988 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:19.989 "is_configured": true, 00:19:19.989 "data_offset": 2048, 00:19:19.989 
"data_size": 63488 00:19:19.989 }, 00:19:19.989 { 00:19:19.989 "name": "pt3", 00:19:19.989 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:19.989 "is_configured": true, 00:19:19.989 "data_offset": 2048, 00:19:19.989 "data_size": 63488 00:19:19.989 }, 00:19:19.989 { 00:19:19.989 "name": "pt4", 00:19:19.989 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:19.989 "is_configured": true, 00:19:19.989 "data_offset": 2048, 00:19:19.989 "data_size": 63488 00:19:19.989 } 00:19:19.989 ] 00:19:19.989 }' 00:19:19.989 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.989 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.247 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:20.247 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:20.247 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:20.247 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:20.247 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:20.247 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:20.247 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:20.247 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.247 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:20.247 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.505 [2024-12-06 15:46:03.541928] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:20.505 15:46:03 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.505 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:20.505 "name": "raid_bdev1", 00:19:20.505 "aliases": [ 00:19:20.505 "78c3dcd4-6a86-4808-aa80-4345f48856b8" 00:19:20.505 ], 00:19:20.505 "product_name": "Raid Volume", 00:19:20.505 "block_size": 512, 00:19:20.505 "num_blocks": 190464, 00:19:20.505 "uuid": "78c3dcd4-6a86-4808-aa80-4345f48856b8", 00:19:20.505 "assigned_rate_limits": { 00:19:20.505 "rw_ios_per_sec": 0, 00:19:20.505 "rw_mbytes_per_sec": 0, 00:19:20.505 "r_mbytes_per_sec": 0, 00:19:20.505 "w_mbytes_per_sec": 0 00:19:20.505 }, 00:19:20.505 "claimed": false, 00:19:20.505 "zoned": false, 00:19:20.505 "supported_io_types": { 00:19:20.505 "read": true, 00:19:20.505 "write": true, 00:19:20.505 "unmap": false, 00:19:20.505 "flush": false, 00:19:20.505 "reset": true, 00:19:20.505 "nvme_admin": false, 00:19:20.505 "nvme_io": false, 00:19:20.505 "nvme_io_md": false, 00:19:20.505 "write_zeroes": true, 00:19:20.505 "zcopy": false, 00:19:20.505 "get_zone_info": false, 00:19:20.505 "zone_management": false, 00:19:20.505 "zone_append": false, 00:19:20.505 "compare": false, 00:19:20.505 "compare_and_write": false, 00:19:20.505 "abort": false, 00:19:20.505 "seek_hole": false, 00:19:20.505 "seek_data": false, 00:19:20.505 "copy": false, 00:19:20.505 "nvme_iov_md": false 00:19:20.505 }, 00:19:20.505 "driver_specific": { 00:19:20.505 "raid": { 00:19:20.505 "uuid": "78c3dcd4-6a86-4808-aa80-4345f48856b8", 00:19:20.505 "strip_size_kb": 64, 00:19:20.505 "state": "online", 00:19:20.505 "raid_level": "raid5f", 00:19:20.505 "superblock": true, 00:19:20.505 "num_base_bdevs": 4, 00:19:20.505 "num_base_bdevs_discovered": 4, 00:19:20.505 "num_base_bdevs_operational": 4, 00:19:20.505 "base_bdevs_list": [ 00:19:20.505 { 00:19:20.505 "name": "pt1", 00:19:20.505 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:20.505 "is_configured": true, 00:19:20.505 "data_offset": 2048, 
00:19:20.505 "data_size": 63488 00:19:20.505 }, 00:19:20.505 { 00:19:20.505 "name": "pt2", 00:19:20.505 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:20.505 "is_configured": true, 00:19:20.506 "data_offset": 2048, 00:19:20.506 "data_size": 63488 00:19:20.506 }, 00:19:20.506 { 00:19:20.506 "name": "pt3", 00:19:20.506 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:20.506 "is_configured": true, 00:19:20.506 "data_offset": 2048, 00:19:20.506 "data_size": 63488 00:19:20.506 }, 00:19:20.506 { 00:19:20.506 "name": "pt4", 00:19:20.506 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:20.506 "is_configured": true, 00:19:20.506 "data_offset": 2048, 00:19:20.506 "data_size": 63488 00:19:20.506 } 00:19:20.506 ] 00:19:20.506 } 00:19:20.506 } 00:19:20.506 }' 00:19:20.506 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:20.506 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:20.506 pt2 00:19:20.506 pt3 00:19:20.506 pt4' 00:19:20.506 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:20.506 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:20.506 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:20.506 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:20.506 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:20.506 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.506 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.506 15:46:03 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.506 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:20.506 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:20.506 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:20.506 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:20.506 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.506 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.506 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:20.506 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.506 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:20.506 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:20.506 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:20.506 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:20.506 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:20.506 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.506 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.506 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:20.765 [2024-12-06 15:46:03.849818] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=78c3dcd4-6a86-4808-aa80-4345f48856b8 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
78c3dcd4-6a86-4808-aa80-4345f48856b8 ']' 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.765 [2024-12-06 15:46:03.889665] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:20.765 [2024-12-06 15:46:03.889806] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:20.765 [2024-12-06 15:46:03.889945] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:20.765 [2024-12-06 15:46:03.890058] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:20.765 [2024-12-06 15:46:03.890078] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:20.765 
15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.765 15:46:03 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.765 15:46:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.765 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.765 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:20.765 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:20.765 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:19:20.765 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:20.765 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:20.765 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:20.765 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:20.765 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:20.765 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:21.024 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:19:21.024 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.024 [2024-12-06 15:46:04.065722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:21.024 [2024-12-06 15:46:04.068289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:21.024 [2024-12-06 15:46:04.068349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:21.024 [2024-12-06 15:46:04.068386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:19:21.024 [2024-12-06 15:46:04.068445] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:21.024 [2024-12-06 15:46:04.068527] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:21.024 [2024-12-06 15:46:04.068552] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:19:21.024 [2024-12-06 15:46:04.068577] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:19:21.024 [2024-12-06 15:46:04.068594] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:21.024 [2024-12-06 15:46:04.068609] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:21.024 request: 00:19:21.024 { 00:19:21.024 "name": "raid_bdev1", 00:19:21.024 "raid_level": "raid5f", 00:19:21.024 "base_bdevs": [ 00:19:21.024 "malloc1", 00:19:21.024 "malloc2", 00:19:21.024 "malloc3", 00:19:21.024 "malloc4" 00:19:21.024 ], 00:19:21.024 "strip_size_kb": 64, 00:19:21.024 "superblock": false, 00:19:21.024 "method": "bdev_raid_create", 00:19:21.024 "req_id": 1 00:19:21.024 } 00:19:21.024 Got JSON-RPC error response 
00:19:21.024 response: 00:19:21.024 { 00:19:21.024 "code": -17, 00:19:21.024 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:21.024 } 00:19:21.024 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:21.024 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:19:21.024 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:21.024 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:21.024 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:21.024 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.024 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:21.024 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.024 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.024 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.024 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:21.024 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:21.024 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:21.024 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.024 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.024 [2024-12-06 15:46:04.133660] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:21.024 [2024-12-06 15:46:04.133738] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:19:21.024 [2024-12-06 15:46:04.133765] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:21.024 [2024-12-06 15:46:04.133780] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:21.024 [2024-12-06 15:46:04.136738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:21.024 [2024-12-06 15:46:04.136782] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:21.024 [2024-12-06 15:46:04.136886] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:21.024 [2024-12-06 15:46:04.136953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:21.024 pt1 00:19:21.024 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.024 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:19:21.024 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:21.024 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:21.024 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:21.024 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:21.024 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:21.024 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.024 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.024 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.024 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:19:21.024 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.024 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.024 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.024 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.024 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.024 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:21.024 "name": "raid_bdev1", 00:19:21.024 "uuid": "78c3dcd4-6a86-4808-aa80-4345f48856b8", 00:19:21.024 "strip_size_kb": 64, 00:19:21.024 "state": "configuring", 00:19:21.024 "raid_level": "raid5f", 00:19:21.024 "superblock": true, 00:19:21.024 "num_base_bdevs": 4, 00:19:21.024 "num_base_bdevs_discovered": 1, 00:19:21.024 "num_base_bdevs_operational": 4, 00:19:21.024 "base_bdevs_list": [ 00:19:21.024 { 00:19:21.024 "name": "pt1", 00:19:21.024 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:21.024 "is_configured": true, 00:19:21.024 "data_offset": 2048, 00:19:21.024 "data_size": 63488 00:19:21.024 }, 00:19:21.024 { 00:19:21.024 "name": null, 00:19:21.024 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:21.024 "is_configured": false, 00:19:21.024 "data_offset": 2048, 00:19:21.024 "data_size": 63488 00:19:21.024 }, 00:19:21.024 { 00:19:21.024 "name": null, 00:19:21.024 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:21.024 "is_configured": false, 00:19:21.024 "data_offset": 2048, 00:19:21.024 "data_size": 63488 00:19:21.024 }, 00:19:21.024 { 00:19:21.024 "name": null, 00:19:21.024 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:21.024 "is_configured": false, 00:19:21.024 "data_offset": 2048, 00:19:21.024 "data_size": 63488 00:19:21.024 } 00:19:21.024 ] 00:19:21.024 }' 
00:19:21.024 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:21.024 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.282 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:19:21.282 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:21.282 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.282 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.282 [2024-12-06 15:46:04.549708] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:21.282 [2024-12-06 15:46:04.549952] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:21.282 [2024-12-06 15:46:04.550018] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:21.282 [2024-12-06 15:46:04.550109] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:21.282 [2024-12-06 15:46:04.550751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:21.282 [2024-12-06 15:46:04.550897] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:21.282 [2024-12-06 15:46:04.551097] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:21.282 [2024-12-06 15:46:04.551214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:21.282 pt2 00:19:21.282 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.282 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:19:21.282 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:21.282 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.282 [2024-12-06 15:46:04.561675] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:21.282 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.282 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:19:21.282 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:21.283 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:21.283 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:21.283 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:21.283 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:21.283 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.283 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.283 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.283 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:21.283 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.283 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.283 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.540 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.540 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:19:21.540 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:21.540 "name": "raid_bdev1", 00:19:21.540 "uuid": "78c3dcd4-6a86-4808-aa80-4345f48856b8", 00:19:21.540 "strip_size_kb": 64, 00:19:21.540 "state": "configuring", 00:19:21.540 "raid_level": "raid5f", 00:19:21.541 "superblock": true, 00:19:21.541 "num_base_bdevs": 4, 00:19:21.541 "num_base_bdevs_discovered": 1, 00:19:21.541 "num_base_bdevs_operational": 4, 00:19:21.541 "base_bdevs_list": [ 00:19:21.541 { 00:19:21.541 "name": "pt1", 00:19:21.541 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:21.541 "is_configured": true, 00:19:21.541 "data_offset": 2048, 00:19:21.541 "data_size": 63488 00:19:21.541 }, 00:19:21.541 { 00:19:21.541 "name": null, 00:19:21.541 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:21.541 "is_configured": false, 00:19:21.541 "data_offset": 0, 00:19:21.541 "data_size": 63488 00:19:21.541 }, 00:19:21.541 { 00:19:21.541 "name": null, 00:19:21.541 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:21.541 "is_configured": false, 00:19:21.541 "data_offset": 2048, 00:19:21.541 "data_size": 63488 00:19:21.541 }, 00:19:21.541 { 00:19:21.541 "name": null, 00:19:21.541 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:21.541 "is_configured": false, 00:19:21.541 "data_offset": 2048, 00:19:21.541 "data_size": 63488 00:19:21.541 } 00:19:21.541 ] 00:19:21.541 }' 00:19:21.541 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:21.541 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.799 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:21.799 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:21.799 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:19:21.799 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.799 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.799 [2024-12-06 15:46:04.969708] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:21.799 [2024-12-06 15:46:04.969800] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:21.799 [2024-12-06 15:46:04.969830] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:21.799 [2024-12-06 15:46:04.969844] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:21.799 [2024-12-06 15:46:04.970540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:21.799 [2024-12-06 15:46:04.970573] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:21.799 [2024-12-06 15:46:04.970692] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:21.799 [2024-12-06 15:46:04.970722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:21.799 pt2 00:19:21.799 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.799 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:21.799 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:21.799 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:21.799 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.799 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.799 [2024-12-06 15:46:04.981658] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:19:21.799 [2024-12-06 15:46:04.981723] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:21.799 [2024-12-06 15:46:04.981763] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:21.799 [2024-12-06 15:46:04.981776] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:21.799 [2024-12-06 15:46:04.982228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:21.799 [2024-12-06 15:46:04.982247] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:21.799 [2024-12-06 15:46:04.982325] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:21.799 [2024-12-06 15:46:04.982354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:21.799 pt3 00:19:21.799 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.799 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:21.799 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:21.799 15:46:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:21.799 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.799 15:46:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.799 [2024-12-06 15:46:04.993612] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:21.799 [2024-12-06 15:46:04.993661] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:21.799 [2024-12-06 15:46:04.993684] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:21.799 [2024-12-06 15:46:04.993696] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:21.799 [2024-12-06 15:46:04.994171] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:21.799 [2024-12-06 15:46:04.994190] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:21.799 [2024-12-06 15:46:04.994266] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:21.799 [2024-12-06 15:46:04.994290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:21.799 [2024-12-06 15:46:04.994448] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:21.799 [2024-12-06 15:46:04.994458] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:21.799 [2024-12-06 15:46:04.994742] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:21.799 [2024-12-06 15:46:05.002045] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:21.799 [2024-12-06 15:46:05.002071] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:21.799 [2024-12-06 15:46:05.002255] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:21.799 pt4 00:19:21.799 15:46:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.799 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:21.799 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:21.799 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:21.799 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:21.799 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:19:21.799 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:21.799 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:21.799 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:21.799 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.799 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.799 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.799 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:21.799 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.800 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.800 15:46:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.800 15:46:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.800 15:46:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.800 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:21.800 "name": "raid_bdev1", 00:19:21.800 "uuid": "78c3dcd4-6a86-4808-aa80-4345f48856b8", 00:19:21.800 "strip_size_kb": 64, 00:19:21.800 "state": "online", 00:19:21.800 "raid_level": "raid5f", 00:19:21.800 "superblock": true, 00:19:21.800 "num_base_bdevs": 4, 00:19:21.800 "num_base_bdevs_discovered": 4, 00:19:21.800 "num_base_bdevs_operational": 4, 00:19:21.800 "base_bdevs_list": [ 00:19:21.800 { 00:19:21.800 "name": "pt1", 00:19:21.800 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:21.800 "is_configured": true, 00:19:21.800 
"data_offset": 2048, 00:19:21.800 "data_size": 63488 00:19:21.800 }, 00:19:21.800 { 00:19:21.800 "name": "pt2", 00:19:21.800 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:21.800 "is_configured": true, 00:19:21.800 "data_offset": 2048, 00:19:21.800 "data_size": 63488 00:19:21.800 }, 00:19:21.800 { 00:19:21.800 "name": "pt3", 00:19:21.800 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:21.800 "is_configured": true, 00:19:21.800 "data_offset": 2048, 00:19:21.800 "data_size": 63488 00:19:21.800 }, 00:19:21.800 { 00:19:21.800 "name": "pt4", 00:19:21.800 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:21.800 "is_configured": true, 00:19:21.800 "data_offset": 2048, 00:19:21.800 "data_size": 63488 00:19:21.800 } 00:19:21.800 ] 00:19:21.800 }' 00:19:21.800 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:21.800 15:46:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.366 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:22.366 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:22.366 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:22.366 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:22.366 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:22.366 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:22.366 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:22.366 15:46:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.366 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:22.366 15:46:05 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.366 [2024-12-06 15:46:05.423665] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:22.366 15:46:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.366 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:22.366 "name": "raid_bdev1", 00:19:22.366 "aliases": [ 00:19:22.366 "78c3dcd4-6a86-4808-aa80-4345f48856b8" 00:19:22.366 ], 00:19:22.366 "product_name": "Raid Volume", 00:19:22.366 "block_size": 512, 00:19:22.366 "num_blocks": 190464, 00:19:22.366 "uuid": "78c3dcd4-6a86-4808-aa80-4345f48856b8", 00:19:22.366 "assigned_rate_limits": { 00:19:22.366 "rw_ios_per_sec": 0, 00:19:22.366 "rw_mbytes_per_sec": 0, 00:19:22.366 "r_mbytes_per_sec": 0, 00:19:22.366 "w_mbytes_per_sec": 0 00:19:22.366 }, 00:19:22.366 "claimed": false, 00:19:22.366 "zoned": false, 00:19:22.366 "supported_io_types": { 00:19:22.366 "read": true, 00:19:22.366 "write": true, 00:19:22.366 "unmap": false, 00:19:22.366 "flush": false, 00:19:22.366 "reset": true, 00:19:22.366 "nvme_admin": false, 00:19:22.366 "nvme_io": false, 00:19:22.366 "nvme_io_md": false, 00:19:22.366 "write_zeroes": true, 00:19:22.366 "zcopy": false, 00:19:22.366 "get_zone_info": false, 00:19:22.366 "zone_management": false, 00:19:22.366 "zone_append": false, 00:19:22.366 "compare": false, 00:19:22.366 "compare_and_write": false, 00:19:22.366 "abort": false, 00:19:22.366 "seek_hole": false, 00:19:22.366 "seek_data": false, 00:19:22.366 "copy": false, 00:19:22.366 "nvme_iov_md": false 00:19:22.366 }, 00:19:22.366 "driver_specific": { 00:19:22.366 "raid": { 00:19:22.366 "uuid": "78c3dcd4-6a86-4808-aa80-4345f48856b8", 00:19:22.366 "strip_size_kb": 64, 00:19:22.366 "state": "online", 00:19:22.366 "raid_level": "raid5f", 00:19:22.366 "superblock": true, 00:19:22.366 "num_base_bdevs": 4, 00:19:22.366 "num_base_bdevs_discovered": 4, 
00:19:22.366 "num_base_bdevs_operational": 4, 00:19:22.366 "base_bdevs_list": [ 00:19:22.366 { 00:19:22.366 "name": "pt1", 00:19:22.366 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:22.366 "is_configured": true, 00:19:22.366 "data_offset": 2048, 00:19:22.366 "data_size": 63488 00:19:22.366 }, 00:19:22.366 { 00:19:22.366 "name": "pt2", 00:19:22.366 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:22.366 "is_configured": true, 00:19:22.367 "data_offset": 2048, 00:19:22.367 "data_size": 63488 00:19:22.367 }, 00:19:22.367 { 00:19:22.367 "name": "pt3", 00:19:22.367 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:22.367 "is_configured": true, 00:19:22.367 "data_offset": 2048, 00:19:22.367 "data_size": 63488 00:19:22.367 }, 00:19:22.367 { 00:19:22.367 "name": "pt4", 00:19:22.367 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:22.367 "is_configured": true, 00:19:22.367 "data_offset": 2048, 00:19:22.367 "data_size": 63488 00:19:22.367 } 00:19:22.367 ] 00:19:22.367 } 00:19:22.367 } 00:19:22.367 }' 00:19:22.367 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:22.367 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:22.367 pt2 00:19:22.367 pt3 00:19:22.367 pt4' 00:19:22.367 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:22.367 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:22.367 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:22.367 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:22.367 15:46:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.367 15:46:05 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.367 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:22.367 15:46:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.367 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:22.367 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:22.367 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:22.367 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:22.367 15:46:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.367 15:46:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.367 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:22.367 15:46:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.367 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:22.367 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:22.367 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:22.367 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:22.367 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:22.367 15:46:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.367 
15:46:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.626 [2024-12-06 15:46:05.755078] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 78c3dcd4-6a86-4808-aa80-4345f48856b8 '!=' 78c3dcd4-6a86-4808-aa80-4345f48856b8 ']' 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.626 [2024-12-06 15:46:05.799008] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.626 "name": "raid_bdev1", 00:19:22.626 "uuid": "78c3dcd4-6a86-4808-aa80-4345f48856b8", 00:19:22.626 "strip_size_kb": 64, 00:19:22.626 "state": "online", 00:19:22.626 "raid_level": "raid5f", 00:19:22.626 "superblock": true, 00:19:22.626 "num_base_bdevs": 4, 00:19:22.626 "num_base_bdevs_discovered": 3, 00:19:22.626 "num_base_bdevs_operational": 3, 00:19:22.626 "base_bdevs_list": [ 00:19:22.626 { 00:19:22.626 "name": null, 00:19:22.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.626 "is_configured": false, 00:19:22.626 "data_offset": 0, 00:19:22.626 "data_size": 63488 00:19:22.626 }, 00:19:22.626 { 00:19:22.626 "name": "pt2", 00:19:22.626 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:22.626 "is_configured": true, 00:19:22.626 "data_offset": 2048, 00:19:22.626 "data_size": 63488 00:19:22.626 }, 00:19:22.626 { 00:19:22.626 "name": "pt3", 00:19:22.626 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:22.626 "is_configured": true, 00:19:22.626 "data_offset": 2048, 00:19:22.626 "data_size": 63488 00:19:22.626 }, 00:19:22.626 { 00:19:22.626 "name": "pt4", 00:19:22.626 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:22.626 "is_configured": true, 00:19:22.626 
"data_offset": 2048, 00:19:22.626 "data_size": 63488 00:19:22.626 } 00:19:22.626 ] 00:19:22.626 }' 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.626 15:46:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.194 [2024-12-06 15:46:06.206710] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:23.194 [2024-12-06 15:46:06.206753] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:23.194 [2024-12-06 15:46:06.206862] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:23.194 [2024-12-06 15:46:06.206965] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:23.194 [2024-12-06 15:46:06.206979] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.194 [2024-12-06 15:46:06.298638] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:23.194 [2024-12-06 15:46:06.298701] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.194 [2024-12-06 15:46:06.298727] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:19:23.194 [2024-12-06 15:46:06.298740] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.194 [2024-12-06 15:46:06.301637] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.194 [2024-12-06 15:46:06.301678] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:23.194 [2024-12-06 15:46:06.301778] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:23.194 [2024-12-06 15:46:06.301830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:23.194 pt2 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.194 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.195 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.195 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.195 15:46:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.195 15:46:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.195 15:46:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.195 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.195 "name": "raid_bdev1", 00:19:23.195 "uuid": "78c3dcd4-6a86-4808-aa80-4345f48856b8", 00:19:23.195 "strip_size_kb": 64, 00:19:23.195 "state": "configuring", 00:19:23.195 "raid_level": "raid5f", 00:19:23.195 "superblock": true, 00:19:23.195 
"num_base_bdevs": 4, 00:19:23.195 "num_base_bdevs_discovered": 1, 00:19:23.195 "num_base_bdevs_operational": 3, 00:19:23.195 "base_bdevs_list": [ 00:19:23.195 { 00:19:23.195 "name": null, 00:19:23.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.195 "is_configured": false, 00:19:23.195 "data_offset": 2048, 00:19:23.195 "data_size": 63488 00:19:23.195 }, 00:19:23.195 { 00:19:23.195 "name": "pt2", 00:19:23.195 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:23.195 "is_configured": true, 00:19:23.195 "data_offset": 2048, 00:19:23.195 "data_size": 63488 00:19:23.195 }, 00:19:23.195 { 00:19:23.195 "name": null, 00:19:23.195 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:23.195 "is_configured": false, 00:19:23.195 "data_offset": 2048, 00:19:23.195 "data_size": 63488 00:19:23.195 }, 00:19:23.195 { 00:19:23.195 "name": null, 00:19:23.195 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:23.195 "is_configured": false, 00:19:23.195 "data_offset": 2048, 00:19:23.195 "data_size": 63488 00:19:23.195 } 00:19:23.195 ] 00:19:23.195 }' 00:19:23.195 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.195 15:46:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.453 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:19:23.453 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:23.453 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:23.453 15:46:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.453 15:46:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.453 [2024-12-06 15:46:06.702300] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:23.453 [2024-12-06 
15:46:06.702582] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.453 [2024-12-06 15:46:06.702655] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:19:23.453 [2024-12-06 15:46:06.702671] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.453 [2024-12-06 15:46:06.703248] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.453 [2024-12-06 15:46:06.703280] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:23.453 [2024-12-06 15:46:06.703396] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:23.453 [2024-12-06 15:46:06.703426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:23.453 pt3 00:19:23.453 15:46:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.453 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:19:23.453 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:23.453 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:23.453 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:23.453 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:23.453 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:23.453 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.453 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.453 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:19:23.453 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.453 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.453 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.453 15:46:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.453 15:46:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.453 15:46:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.453 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.453 "name": "raid_bdev1", 00:19:23.453 "uuid": "78c3dcd4-6a86-4808-aa80-4345f48856b8", 00:19:23.453 "strip_size_kb": 64, 00:19:23.453 "state": "configuring", 00:19:23.453 "raid_level": "raid5f", 00:19:23.453 "superblock": true, 00:19:23.453 "num_base_bdevs": 4, 00:19:23.453 "num_base_bdevs_discovered": 2, 00:19:23.453 "num_base_bdevs_operational": 3, 00:19:23.453 "base_bdevs_list": [ 00:19:23.453 { 00:19:23.453 "name": null, 00:19:23.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.453 "is_configured": false, 00:19:23.453 "data_offset": 2048, 00:19:23.453 "data_size": 63488 00:19:23.453 }, 00:19:23.453 { 00:19:23.453 "name": "pt2", 00:19:23.453 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:23.453 "is_configured": true, 00:19:23.453 "data_offset": 2048, 00:19:23.453 "data_size": 63488 00:19:23.453 }, 00:19:23.453 { 00:19:23.453 "name": "pt3", 00:19:23.453 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:23.453 "is_configured": true, 00:19:23.453 "data_offset": 2048, 00:19:23.453 "data_size": 63488 00:19:23.453 }, 00:19:23.453 { 00:19:23.453 "name": null, 00:19:23.453 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:23.453 "is_configured": false, 00:19:23.453 "data_offset": 2048, 
00:19:23.453 "data_size": 63488 00:19:23.453 } 00:19:23.453 ] 00:19:23.453 }' 00:19:23.712 15:46:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.712 15:46:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.971 15:46:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:19:23.971 15:46:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:23.971 15:46:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:19:23.971 15:46:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:23.971 15:46:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.971 15:46:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.971 [2024-12-06 15:46:07.081730] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:23.971 [2024-12-06 15:46:07.081796] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.971 [2024-12-06 15:46:07.081826] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:19:23.971 [2024-12-06 15:46:07.081839] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.971 [2024-12-06 15:46:07.082391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.971 [2024-12-06 15:46:07.082413] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:23.971 [2024-12-06 15:46:07.082524] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:23.971 [2024-12-06 15:46:07.082560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:23.971 [2024-12-06 15:46:07.082729] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:23.971 [2024-12-06 15:46:07.082739] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:23.971 [2024-12-06 15:46:07.083047] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:23.971 [2024-12-06 15:46:07.090765] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:23.971 [2024-12-06 15:46:07.090793] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:23.971 [2024-12-06 15:46:07.091125] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:23.971 pt4 00:19:23.971 15:46:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.971 15:46:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:23.971 15:46:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:23.971 15:46:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:23.971 15:46:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:23.971 15:46:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:23.971 15:46:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:23.971 15:46:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.971 15:46:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.971 15:46:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.971 15:46:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.971 
15:46:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.971 15:46:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.971 15:46:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.972 15:46:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.972 15:46:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.972 15:46:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.972 "name": "raid_bdev1", 00:19:23.972 "uuid": "78c3dcd4-6a86-4808-aa80-4345f48856b8", 00:19:23.972 "strip_size_kb": 64, 00:19:23.972 "state": "online", 00:19:23.972 "raid_level": "raid5f", 00:19:23.972 "superblock": true, 00:19:23.972 "num_base_bdevs": 4, 00:19:23.972 "num_base_bdevs_discovered": 3, 00:19:23.972 "num_base_bdevs_operational": 3, 00:19:23.972 "base_bdevs_list": [ 00:19:23.972 { 00:19:23.972 "name": null, 00:19:23.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.972 "is_configured": false, 00:19:23.972 "data_offset": 2048, 00:19:23.972 "data_size": 63488 00:19:23.972 }, 00:19:23.972 { 00:19:23.972 "name": "pt2", 00:19:23.972 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:23.972 "is_configured": true, 00:19:23.972 "data_offset": 2048, 00:19:23.972 "data_size": 63488 00:19:23.972 }, 00:19:23.972 { 00:19:23.972 "name": "pt3", 00:19:23.972 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:23.972 "is_configured": true, 00:19:23.972 "data_offset": 2048, 00:19:23.972 "data_size": 63488 00:19:23.972 }, 00:19:23.972 { 00:19:23.972 "name": "pt4", 00:19:23.972 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:23.972 "is_configured": true, 00:19:23.972 "data_offset": 2048, 00:19:23.972 "data_size": 63488 00:19:23.972 } 00:19:23.972 ] 00:19:23.972 }' 00:19:23.972 15:46:07 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.972 15:46:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.541 15:46:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:24.541 15:46:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.541 15:46:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.541 [2024-12-06 15:46:07.528646] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:24.541 [2024-12-06 15:46:07.528676] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:24.541 [2024-12-06 15:46:07.528761] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:24.541 [2024-12-06 15:46:07.528845] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:24.541 [2024-12-06 15:46:07.528862] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:24.541 15:46:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.541 15:46:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:24.541 15:46:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.541 15:46:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.541 15:46:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.541 15:46:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.541 15:46:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:24.541 15:46:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:19:24.541 15:46:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:19:24.541 15:46:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:19:24.541 15:46:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:19:24.541 15:46:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.541 15:46:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.541 15:46:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.541 15:46:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:24.541 15:46:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.541 15:46:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.541 [2024-12-06 15:46:07.600616] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:24.541 [2024-12-06 15:46:07.600686] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:24.541 [2024-12-06 15:46:07.600718] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:19:24.541 [2024-12-06 15:46:07.600735] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:24.541 [2024-12-06 15:46:07.603704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:24.541 [2024-12-06 15:46:07.603878] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:24.541 [2024-12-06 15:46:07.603993] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:24.541 [2024-12-06 15:46:07.604058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:24.541 
[2024-12-06 15:46:07.604215] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:24.541 [2024-12-06 15:46:07.604231] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:24.541 [2024-12-06 15:46:07.604248] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:24.541 [2024-12-06 15:46:07.604317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:24.541 [2024-12-06 15:46:07.604423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:24.541 pt1 00:19:24.541 15:46:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.541 15:46:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:19:24.541 15:46:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:19:24.541 15:46:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:24.541 15:46:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:24.541 15:46:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:24.541 15:46:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:24.541 15:46:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:24.541 15:46:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:24.541 15:46:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:24.541 15:46:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:24.541 15:46:07 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:19:24.541 15:46:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.541 15:46:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.541 15:46:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.541 15:46:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.541 15:46:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.541 15:46:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.541 "name": "raid_bdev1", 00:19:24.541 "uuid": "78c3dcd4-6a86-4808-aa80-4345f48856b8", 00:19:24.541 "strip_size_kb": 64, 00:19:24.541 "state": "configuring", 00:19:24.541 "raid_level": "raid5f", 00:19:24.541 "superblock": true, 00:19:24.541 "num_base_bdevs": 4, 00:19:24.541 "num_base_bdevs_discovered": 2, 00:19:24.541 "num_base_bdevs_operational": 3, 00:19:24.541 "base_bdevs_list": [ 00:19:24.541 { 00:19:24.541 "name": null, 00:19:24.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.541 "is_configured": false, 00:19:24.541 "data_offset": 2048, 00:19:24.541 "data_size": 63488 00:19:24.541 }, 00:19:24.541 { 00:19:24.541 "name": "pt2", 00:19:24.541 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:24.541 "is_configured": true, 00:19:24.541 "data_offset": 2048, 00:19:24.541 "data_size": 63488 00:19:24.541 }, 00:19:24.541 { 00:19:24.541 "name": "pt3", 00:19:24.541 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:24.541 "is_configured": true, 00:19:24.541 "data_offset": 2048, 00:19:24.541 "data_size": 63488 00:19:24.541 }, 00:19:24.541 { 00:19:24.541 "name": null, 00:19:24.541 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:24.541 "is_configured": false, 00:19:24.541 "data_offset": 2048, 00:19:24.541 "data_size": 63488 00:19:24.541 } 00:19:24.541 ] 
00:19:24.541 }' 00:19:24.541 15:46:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.541 15:46:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.801 15:46:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:19:24.801 15:46:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.801 15:46:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.801 15:46:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:24.801 15:46:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.801 15:46:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:19:24.801 15:46:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:24.801 15:46:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.801 15:46:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.801 [2024-12-06 15:46:08.068025] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:24.801 [2024-12-06 15:46:08.068081] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:24.801 [2024-12-06 15:46:08.068107] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:19:24.801 [2024-12-06 15:46:08.068119] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:24.801 [2024-12-06 15:46:08.068612] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:24.801 [2024-12-06 15:46:08.068632] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:19:24.801 [2024-12-06 15:46:08.068713] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:24.801 [2024-12-06 15:46:08.068735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:24.801 [2024-12-06 15:46:08.068886] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:24.801 [2024-12-06 15:46:08.068896] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:24.801 [2024-12-06 15:46:08.069191] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:24.801 [2024-12-06 15:46:08.077014] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:24.801 [2024-12-06 15:46:08.077051] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:24.801 [2024-12-06 15:46:08.077346] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:24.801 pt4 00:19:24.801 15:46:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.801 15:46:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:24.801 15:46:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:24.801 15:46:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:24.801 15:46:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:24.801 15:46:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:24.801 15:46:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:24.801 15:46:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:24.801 15:46:08 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:24.801 15:46:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:24.801 15:46:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:24.801 15:46:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.801 15:46:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.801 15:46:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.801 15:46:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.061 15:46:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.061 15:46:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:25.061 "name": "raid_bdev1", 00:19:25.061 "uuid": "78c3dcd4-6a86-4808-aa80-4345f48856b8", 00:19:25.061 "strip_size_kb": 64, 00:19:25.061 "state": "online", 00:19:25.061 "raid_level": "raid5f", 00:19:25.061 "superblock": true, 00:19:25.061 "num_base_bdevs": 4, 00:19:25.061 "num_base_bdevs_discovered": 3, 00:19:25.061 "num_base_bdevs_operational": 3, 00:19:25.061 "base_bdevs_list": [ 00:19:25.061 { 00:19:25.061 "name": null, 00:19:25.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.061 "is_configured": false, 00:19:25.061 "data_offset": 2048, 00:19:25.061 "data_size": 63488 00:19:25.061 }, 00:19:25.061 { 00:19:25.061 "name": "pt2", 00:19:25.061 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:25.061 "is_configured": true, 00:19:25.061 "data_offset": 2048, 00:19:25.061 "data_size": 63488 00:19:25.061 }, 00:19:25.061 { 00:19:25.061 "name": "pt3", 00:19:25.061 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:25.061 "is_configured": true, 00:19:25.061 "data_offset": 2048, 00:19:25.061 "data_size": 63488 
00:19:25.061 }, 00:19:25.061 { 00:19:25.061 "name": "pt4", 00:19:25.061 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:25.061 "is_configured": true, 00:19:25.061 "data_offset": 2048, 00:19:25.061 "data_size": 63488 00:19:25.061 } 00:19:25.061 ] 00:19:25.061 }' 00:19:25.061 15:46:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:25.061 15:46:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.321 15:46:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:25.321 15:46:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:25.321 15:46:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.321 15:46:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.321 15:46:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.321 15:46:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:25.321 15:46:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:25.321 15:46:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:25.321 15:46:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.321 15:46:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.321 [2024-12-06 15:46:08.550690] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:25.321 15:46:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.321 15:46:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 78c3dcd4-6a86-4808-aa80-4345f48856b8 '!=' 78c3dcd4-6a86-4808-aa80-4345f48856b8 ']' 00:19:25.321 15:46:08 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84120 00:19:25.321 15:46:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84120 ']' 00:19:25.321 15:46:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84120 00:19:25.321 15:46:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:19:25.321 15:46:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:25.321 15:46:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84120 00:19:25.580 15:46:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:25.580 killing process with pid 84120 00:19:25.580 15:46:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:25.580 15:46:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84120' 00:19:25.580 15:46:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84120 00:19:25.580 [2024-12-06 15:46:08.619376] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:25.580 [2024-12-06 15:46:08.619471] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:25.580 15:46:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84120 00:19:25.580 [2024-12-06 15:46:08.619570] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:25.580 [2024-12-06 15:46:08.619591] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:25.839 [2024-12-06 15:46:09.034170] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:27.218 15:46:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:19:27.218 
00:19:27.218 real 0m8.357s 00:19:27.218 user 0m12.780s 00:19:27.218 sys 0m1.912s 00:19:27.218 15:46:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:27.218 15:46:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.218 ************************************ 00:19:27.218 END TEST raid5f_superblock_test 00:19:27.218 ************************************ 00:19:27.218 15:46:10 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:19:27.218 15:46:10 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:19:27.218 15:46:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:27.218 15:46:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:27.218 15:46:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:27.218 ************************************ 00:19:27.218 START TEST raid5f_rebuild_test 00:19:27.218 ************************************ 00:19:27.218 15:46:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:19:27.218 15:46:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:19:27.218 15:46:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:19:27.218 15:46:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:19:27.218 15:46:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:27.218 15:46:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:27.218 15:46:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:27.218 15:46:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:27.218 15:46:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:19:27.218 15:46:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:27.218 15:46:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:27.218 15:46:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:27.218 15:46:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:27.218 15:46:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:27.218 15:46:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:27.218 15:46:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:27.218 15:46:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:27.218 15:46:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:19:27.218 15:46:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:27.218 15:46:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:27.218 15:46:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:27.218 15:46:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:27.218 15:46:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:27.218 15:46:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:27.219 15:46:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:27.219 15:46:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:27.219 15:46:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:27.219 15:46:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:19:27.219 15:46:10 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:19:27.219 15:46:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:19:27.219 15:46:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:19:27.219 15:46:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:19:27.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:27.219 15:46:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84600 00:19:27.219 15:46:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84600 00:19:27.219 15:46:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:27.219 15:46:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84600 ']' 00:19:27.219 15:46:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.219 15:46:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:27.219 15:46:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.219 15:46:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:27.219 15:46:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.219 [2024-12-06 15:46:10.447859] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:19:27.219 [2024-12-06 15:46:10.448252] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:19:27.219 Zero copy mechanism will not be used. 
00:19:27.219 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84600 ] 00:19:27.478 [2024-12-06 15:46:10.633661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.478 [2024-12-06 15:46:10.765587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.737 [2024-12-06 15:46:11.006540] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:27.737 [2024-12-06 15:46:11.006857] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:27.997 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:27.997 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:19:27.997 15:46:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:27.997 15:46:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:27.997 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.997 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.257 BaseBdev1_malloc 00:19:28.257 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.257 15:46:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:28.257 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.257 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.257 [2024-12-06 15:46:11.327775] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:28.257 [2024-12-06 15:46:11.327989] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:19:28.257 [2024-12-06 15:46:11.328027] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:28.257 [2024-12-06 15:46:11.328043] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:28.257 [2024-12-06 15:46:11.330745] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:28.257 [2024-12-06 15:46:11.330791] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:28.257 BaseBdev1 00:19:28.257 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.257 15:46:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:28.257 15:46:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:28.257 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.257 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.257 BaseBdev2_malloc 00:19:28.257 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.257 15:46:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:28.257 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.257 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.257 [2024-12-06 15:46:11.391414] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:28.257 [2024-12-06 15:46:11.391494] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:28.257 [2024-12-06 15:46:11.391542] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:28.257 [2024-12-06 15:46:11.391558] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:28.257 [2024-12-06 15:46:11.394210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:28.257 [2024-12-06 15:46:11.394384] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:28.257 BaseBdev2 00:19:28.257 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.257 15:46:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:28.257 15:46:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:28.257 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.257 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.257 BaseBdev3_malloc 00:19:28.257 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.257 15:46:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:28.257 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.257 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.257 [2024-12-06 15:46:11.462594] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:28.257 [2024-12-06 15:46:11.462777] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:28.257 [2024-12-06 15:46:11.462810] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:28.257 [2024-12-06 15:46:11.462826] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:28.257 [2024-12-06 15:46:11.465570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:28.257 [2024-12-06 
15:46:11.465614] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:28.257 BaseBdev3 00:19:28.257 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.257 15:46:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:28.257 15:46:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:28.258 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.258 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.258 BaseBdev4_malloc 00:19:28.258 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.258 15:46:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:28.258 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.258 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.258 [2024-12-06 15:46:11.524696] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:28.258 [2024-12-06 15:46:11.524881] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:28.258 [2024-12-06 15:46:11.524911] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:28.258 [2024-12-06 15:46:11.524926] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:28.258 [2024-12-06 15:46:11.527579] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:28.258 [2024-12-06 15:46:11.527624] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:28.258 BaseBdev4 00:19:28.258 15:46:11 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.258 15:46:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:28.258 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.258 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.517 spare_malloc 00:19:28.518 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.518 15:46:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:28.518 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.518 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.518 spare_delay 00:19:28.518 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.518 15:46:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:28.518 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.518 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.518 [2024-12-06 15:46:11.599428] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:28.518 [2024-12-06 15:46:11.599609] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:28.518 [2024-12-06 15:46:11.599637] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:28.518 [2024-12-06 15:46:11.599652] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:28.518 [2024-12-06 15:46:11.602300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:28.518 [2024-12-06 15:46:11.602343] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:28.518 spare 00:19:28.518 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.518 15:46:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:19:28.518 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.518 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.518 [2024-12-06 15:46:11.611478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:28.518 [2024-12-06 15:46:11.613959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:28.518 [2024-12-06 15:46:11.614023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:28.518 [2024-12-06 15:46:11.614076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:28.518 [2024-12-06 15:46:11.614168] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:28.518 [2024-12-06 15:46:11.614183] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:28.518 [2024-12-06 15:46:11.614466] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:28.518 [2024-12-06 15:46:11.622237] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:28.518 [2024-12-06 15:46:11.622259] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:28.518 [2024-12-06 15:46:11.622460] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:28.518 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.518 15:46:11 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:28.518 15:46:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:28.518 15:46:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:28.518 15:46:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:28.518 15:46:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:28.518 15:46:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:28.518 15:46:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:28.518 15:46:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:28.518 15:46:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:28.518 15:46:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:28.518 15:46:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.518 15:46:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.518 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.518 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.518 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.518 15:46:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:28.518 "name": "raid_bdev1", 00:19:28.518 "uuid": "8122ed2c-acce-4e67-9277-ea6411864e3f", 00:19:28.518 "strip_size_kb": 64, 00:19:28.518 "state": "online", 00:19:28.518 "raid_level": "raid5f", 00:19:28.518 "superblock": false, 00:19:28.518 "num_base_bdevs": 4, 00:19:28.518 
"num_base_bdevs_discovered": 4, 00:19:28.518 "num_base_bdevs_operational": 4, 00:19:28.518 "base_bdevs_list": [ 00:19:28.518 { 00:19:28.518 "name": "BaseBdev1", 00:19:28.518 "uuid": "7a0eb143-1fec-52ac-9f87-68e8fbde3f2e", 00:19:28.518 "is_configured": true, 00:19:28.518 "data_offset": 0, 00:19:28.518 "data_size": 65536 00:19:28.518 }, 00:19:28.518 { 00:19:28.518 "name": "BaseBdev2", 00:19:28.518 "uuid": "863116ce-7d20-5a2c-82d6-b3d852f6fbb2", 00:19:28.518 "is_configured": true, 00:19:28.518 "data_offset": 0, 00:19:28.518 "data_size": 65536 00:19:28.518 }, 00:19:28.518 { 00:19:28.518 "name": "BaseBdev3", 00:19:28.518 "uuid": "69ebc814-bfeb-5477-bcc2-64a8218c1c89", 00:19:28.518 "is_configured": true, 00:19:28.518 "data_offset": 0, 00:19:28.518 "data_size": 65536 00:19:28.518 }, 00:19:28.518 { 00:19:28.518 "name": "BaseBdev4", 00:19:28.518 "uuid": "e84e3098-053a-5bda-a87a-edccfc6c6230", 00:19:28.518 "is_configured": true, 00:19:28.518 "data_offset": 0, 00:19:28.518 "data_size": 65536 00:19:28.518 } 00:19:28.518 ] 00:19:28.518 }' 00:19:28.518 15:46:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:28.518 15:46:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.777 15:46:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:28.777 15:46:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:28.777 15:46:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.777 15:46:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.777 [2024-12-06 15:46:12.047859] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:29.037 15:46:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.037 15:46:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 
00:19:29.037 15:46:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:29.037 15:46:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.037 15:46:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.037 15:46:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.037 15:46:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.037 15:46:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:19:29.037 15:46:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:29.037 15:46:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:29.037 15:46:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:29.037 15:46:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:29.037 15:46:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:29.037 15:46:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:29.037 15:46:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:29.037 15:46:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:29.037 15:46:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:29.037 15:46:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:29.037 15:46:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:29.037 15:46:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:29.037 15:46:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:29.037 [2024-12-06 15:46:12.291343] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:29.037 /dev/nbd0 00:19:29.037 15:46:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:29.296 15:46:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:29.296 15:46:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:29.296 15:46:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:19:29.296 15:46:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:29.296 15:46:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:29.296 15:46:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:29.296 15:46:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:19:29.296 15:46:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:29.296 15:46:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:29.296 15:46:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:29.296 1+0 records in 00:19:29.296 1+0 records out 00:19:29.296 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366297 s, 11.2 MB/s 00:19:29.296 15:46:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:29.296 15:46:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:19:29.296 15:46:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:19:29.296 15:46:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:29.296 15:46:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:19:29.296 15:46:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:29.296 15:46:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:29.296 15:46:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:19:29.296 15:46:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:19:29.296 15:46:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:19:29.296 15:46:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:19:29.864 512+0 records in 00:19:29.864 512+0 records out 00:19:29.864 100663296 bytes (101 MB, 96 MiB) copied, 0.51494 s, 195 MB/s 00:19:29.864 15:46:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:29.864 15:46:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:29.864 15:46:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:29.864 15:46:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:29.864 15:46:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:29.864 15:46:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:29.864 15:46:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:29.864 [2024-12-06 15:46:13.087116] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:29.864 15:46:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:19:29.864 15:46:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:29.864 15:46:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:29.864 15:46:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:29.864 15:46:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:29.864 15:46:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:29.864 15:46:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:29.865 15:46:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:29.865 15:46:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:29.865 15:46:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.865 15:46:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.865 [2024-12-06 15:46:13.132846] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:29.865 15:46:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.865 15:46:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:29.865 15:46:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:29.865 15:46:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:29.865 15:46:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:29.865 15:46:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:29.865 15:46:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:29.865 15:46:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 
-- # local raid_bdev_info 00:19:29.865 15:46:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.865 15:46:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.865 15:46:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.865 15:46:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.865 15:46:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.865 15:46:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.865 15:46:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.124 15:46:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.124 15:46:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:30.124 "name": "raid_bdev1", 00:19:30.124 "uuid": "8122ed2c-acce-4e67-9277-ea6411864e3f", 00:19:30.124 "strip_size_kb": 64, 00:19:30.124 "state": "online", 00:19:30.124 "raid_level": "raid5f", 00:19:30.124 "superblock": false, 00:19:30.124 "num_base_bdevs": 4, 00:19:30.124 "num_base_bdevs_discovered": 3, 00:19:30.124 "num_base_bdevs_operational": 3, 00:19:30.124 "base_bdevs_list": [ 00:19:30.124 { 00:19:30.124 "name": null, 00:19:30.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.124 "is_configured": false, 00:19:30.124 "data_offset": 0, 00:19:30.124 "data_size": 65536 00:19:30.124 }, 00:19:30.124 { 00:19:30.124 "name": "BaseBdev2", 00:19:30.124 "uuid": "863116ce-7d20-5a2c-82d6-b3d852f6fbb2", 00:19:30.124 "is_configured": true, 00:19:30.124 "data_offset": 0, 00:19:30.124 "data_size": 65536 00:19:30.124 }, 00:19:30.124 { 00:19:30.124 "name": "BaseBdev3", 00:19:30.124 "uuid": "69ebc814-bfeb-5477-bcc2-64a8218c1c89", 00:19:30.124 "is_configured": true, 00:19:30.124 "data_offset": 0, 
00:19:30.124 "data_size": 65536 00:19:30.124 }, 00:19:30.124 { 00:19:30.124 "name": "BaseBdev4", 00:19:30.124 "uuid": "e84e3098-053a-5bda-a87a-edccfc6c6230", 00:19:30.124 "is_configured": true, 00:19:30.124 "data_offset": 0, 00:19:30.124 "data_size": 65536 00:19:30.124 } 00:19:30.124 ] 00:19:30.124 }' 00:19:30.124 15:46:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:30.124 15:46:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.397 15:46:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:30.397 15:46:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.397 15:46:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.397 [2024-12-06 15:46:13.560290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:30.397 [2024-12-06 15:46:13.577819] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:19:30.397 15:46:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.397 15:46:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:30.397 [2024-12-06 15:46:13.588085] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:31.376 15:46:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:31.376 15:46:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:31.376 15:46:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:31.376 15:46:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:31.376 15:46:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:31.376 15:46:14 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.376 15:46:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.376 15:46:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.376 15:46:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.376 15:46:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.376 15:46:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:31.376 "name": "raid_bdev1", 00:19:31.377 "uuid": "8122ed2c-acce-4e67-9277-ea6411864e3f", 00:19:31.377 "strip_size_kb": 64, 00:19:31.377 "state": "online", 00:19:31.377 "raid_level": "raid5f", 00:19:31.377 "superblock": false, 00:19:31.377 "num_base_bdevs": 4, 00:19:31.377 "num_base_bdevs_discovered": 4, 00:19:31.377 "num_base_bdevs_operational": 4, 00:19:31.377 "process": { 00:19:31.377 "type": "rebuild", 00:19:31.377 "target": "spare", 00:19:31.377 "progress": { 00:19:31.377 "blocks": 19200, 00:19:31.377 "percent": 9 00:19:31.377 } 00:19:31.377 }, 00:19:31.377 "base_bdevs_list": [ 00:19:31.377 { 00:19:31.377 "name": "spare", 00:19:31.377 "uuid": "e176850d-1cfa-54c1-9c56-29b536efdc4f", 00:19:31.377 "is_configured": true, 00:19:31.377 "data_offset": 0, 00:19:31.377 "data_size": 65536 00:19:31.377 }, 00:19:31.377 { 00:19:31.377 "name": "BaseBdev2", 00:19:31.377 "uuid": "863116ce-7d20-5a2c-82d6-b3d852f6fbb2", 00:19:31.377 "is_configured": true, 00:19:31.377 "data_offset": 0, 00:19:31.377 "data_size": 65536 00:19:31.377 }, 00:19:31.377 { 00:19:31.377 "name": "BaseBdev3", 00:19:31.377 "uuid": "69ebc814-bfeb-5477-bcc2-64a8218c1c89", 00:19:31.377 "is_configured": true, 00:19:31.377 "data_offset": 0, 00:19:31.377 "data_size": 65536 00:19:31.377 }, 00:19:31.377 { 00:19:31.377 "name": "BaseBdev4", 00:19:31.377 "uuid": "e84e3098-053a-5bda-a87a-edccfc6c6230", 
00:19:31.377 "is_configured": true, 00:19:31.377 "data_offset": 0, 00:19:31.377 "data_size": 65536 00:19:31.377 } 00:19:31.377 ] 00:19:31.377 }' 00:19:31.377 15:46:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:31.636 15:46:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:31.636 15:46:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:31.636 15:46:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:31.636 15:46:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:31.637 15:46:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.637 15:46:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.637 [2024-12-06 15:46:14.719646] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:31.637 [2024-12-06 15:46:14.796317] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:31.637 [2024-12-06 15:46:14.796399] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:31.637 [2024-12-06 15:46:14.796420] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:31.637 [2024-12-06 15:46:14.796433] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:31.637 15:46:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.637 15:46:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:31.637 15:46:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:31.637 15:46:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:19:31.637 15:46:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:31.637 15:46:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:31.637 15:46:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:31.637 15:46:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.637 15:46:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.637 15:46:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.637 15:46:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:31.637 15:46:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.637 15:46:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.637 15:46:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.637 15:46:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.637 15:46:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.637 15:46:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:31.637 "name": "raid_bdev1", 00:19:31.637 "uuid": "8122ed2c-acce-4e67-9277-ea6411864e3f", 00:19:31.637 "strip_size_kb": 64, 00:19:31.637 "state": "online", 00:19:31.637 "raid_level": "raid5f", 00:19:31.637 "superblock": false, 00:19:31.637 "num_base_bdevs": 4, 00:19:31.637 "num_base_bdevs_discovered": 3, 00:19:31.637 "num_base_bdevs_operational": 3, 00:19:31.637 "base_bdevs_list": [ 00:19:31.637 { 00:19:31.637 "name": null, 00:19:31.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.637 "is_configured": false, 00:19:31.637 "data_offset": 0, 00:19:31.637 "data_size": 65536 
00:19:31.637 }, 00:19:31.637 { 00:19:31.637 "name": "BaseBdev2", 00:19:31.637 "uuid": "863116ce-7d20-5a2c-82d6-b3d852f6fbb2", 00:19:31.637 "is_configured": true, 00:19:31.637 "data_offset": 0, 00:19:31.637 "data_size": 65536 00:19:31.637 }, 00:19:31.637 { 00:19:31.637 "name": "BaseBdev3", 00:19:31.637 "uuid": "69ebc814-bfeb-5477-bcc2-64a8218c1c89", 00:19:31.637 "is_configured": true, 00:19:31.637 "data_offset": 0, 00:19:31.637 "data_size": 65536 00:19:31.637 }, 00:19:31.637 { 00:19:31.637 "name": "BaseBdev4", 00:19:31.637 "uuid": "e84e3098-053a-5bda-a87a-edccfc6c6230", 00:19:31.637 "is_configured": true, 00:19:31.637 "data_offset": 0, 00:19:31.637 "data_size": 65536 00:19:31.637 } 00:19:31.637 ] 00:19:31.637 }' 00:19:31.637 15:46:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:31.637 15:46:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.217 15:46:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:32.217 15:46:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:32.217 15:46:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:32.217 15:46:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:32.217 15:46:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:32.217 15:46:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.217 15:46:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.217 15:46:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.217 15:46:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.217 15:46:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:19:32.217 15:46:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:32.217 "name": "raid_bdev1", 00:19:32.217 "uuid": "8122ed2c-acce-4e67-9277-ea6411864e3f", 00:19:32.217 "strip_size_kb": 64, 00:19:32.217 "state": "online", 00:19:32.217 "raid_level": "raid5f", 00:19:32.217 "superblock": false, 00:19:32.217 "num_base_bdevs": 4, 00:19:32.217 "num_base_bdevs_discovered": 3, 00:19:32.217 "num_base_bdevs_operational": 3, 00:19:32.217 "base_bdevs_list": [ 00:19:32.217 { 00:19:32.217 "name": null, 00:19:32.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.217 "is_configured": false, 00:19:32.217 "data_offset": 0, 00:19:32.217 "data_size": 65536 00:19:32.217 }, 00:19:32.217 { 00:19:32.217 "name": "BaseBdev2", 00:19:32.217 "uuid": "863116ce-7d20-5a2c-82d6-b3d852f6fbb2", 00:19:32.217 "is_configured": true, 00:19:32.217 "data_offset": 0, 00:19:32.217 "data_size": 65536 00:19:32.217 }, 00:19:32.217 { 00:19:32.217 "name": "BaseBdev3", 00:19:32.217 "uuid": "69ebc814-bfeb-5477-bcc2-64a8218c1c89", 00:19:32.217 "is_configured": true, 00:19:32.217 "data_offset": 0, 00:19:32.217 "data_size": 65536 00:19:32.217 }, 00:19:32.217 { 00:19:32.217 "name": "BaseBdev4", 00:19:32.217 "uuid": "e84e3098-053a-5bda-a87a-edccfc6c6230", 00:19:32.217 "is_configured": true, 00:19:32.217 "data_offset": 0, 00:19:32.217 "data_size": 65536 00:19:32.217 } 00:19:32.217 ] 00:19:32.217 }' 00:19:32.217 15:46:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:32.217 15:46:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:32.217 15:46:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:32.217 15:46:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:32.217 15:46:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 
00:19:32.217 15:46:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.217 15:46:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.217 [2024-12-06 15:46:15.370662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:32.217 [2024-12-06 15:46:15.386927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:19:32.217 15:46:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.217 15:46:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:32.217 [2024-12-06 15:46:15.397123] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:33.152 15:46:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:33.152 15:46:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:33.152 15:46:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:33.152 15:46:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:33.152 15:46:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:33.152 15:46:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.152 15:46:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.152 15:46:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.152 15:46:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.152 15:46:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.152 15:46:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:33.152 
"name": "raid_bdev1", 00:19:33.152 "uuid": "8122ed2c-acce-4e67-9277-ea6411864e3f", 00:19:33.152 "strip_size_kb": 64, 00:19:33.152 "state": "online", 00:19:33.152 "raid_level": "raid5f", 00:19:33.152 "superblock": false, 00:19:33.152 "num_base_bdevs": 4, 00:19:33.152 "num_base_bdevs_discovered": 4, 00:19:33.152 "num_base_bdevs_operational": 4, 00:19:33.152 "process": { 00:19:33.152 "type": "rebuild", 00:19:33.152 "target": "spare", 00:19:33.152 "progress": { 00:19:33.152 "blocks": 19200, 00:19:33.152 "percent": 9 00:19:33.152 } 00:19:33.152 }, 00:19:33.152 "base_bdevs_list": [ 00:19:33.152 { 00:19:33.152 "name": "spare", 00:19:33.152 "uuid": "e176850d-1cfa-54c1-9c56-29b536efdc4f", 00:19:33.152 "is_configured": true, 00:19:33.152 "data_offset": 0, 00:19:33.152 "data_size": 65536 00:19:33.152 }, 00:19:33.152 { 00:19:33.152 "name": "BaseBdev2", 00:19:33.152 "uuid": "863116ce-7d20-5a2c-82d6-b3d852f6fbb2", 00:19:33.152 "is_configured": true, 00:19:33.152 "data_offset": 0, 00:19:33.152 "data_size": 65536 00:19:33.152 }, 00:19:33.152 { 00:19:33.152 "name": "BaseBdev3", 00:19:33.152 "uuid": "69ebc814-bfeb-5477-bcc2-64a8218c1c89", 00:19:33.152 "is_configured": true, 00:19:33.152 "data_offset": 0, 00:19:33.152 "data_size": 65536 00:19:33.152 }, 00:19:33.152 { 00:19:33.152 "name": "BaseBdev4", 00:19:33.152 "uuid": "e84e3098-053a-5bda-a87a-edccfc6c6230", 00:19:33.152 "is_configured": true, 00:19:33.152 "data_offset": 0, 00:19:33.152 "data_size": 65536 00:19:33.152 } 00:19:33.152 ] 00:19:33.152 }' 00:19:33.152 15:46:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:33.411 15:46:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:33.411 15:46:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:33.411 15:46:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:33.411 15:46:16 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:19:33.411 15:46:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:19:33.411 15:46:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:19:33.411 15:46:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=624 00:19:33.411 15:46:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:33.411 15:46:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:33.411 15:46:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:33.411 15:46:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:33.411 15:46:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:33.411 15:46:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:33.411 15:46:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.411 15:46:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.411 15:46:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.411 15:46:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.411 15:46:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.411 15:46:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:33.411 "name": "raid_bdev1", 00:19:33.411 "uuid": "8122ed2c-acce-4e67-9277-ea6411864e3f", 00:19:33.411 "strip_size_kb": 64, 00:19:33.411 "state": "online", 00:19:33.411 "raid_level": "raid5f", 00:19:33.411 "superblock": false, 00:19:33.411 "num_base_bdevs": 4, 00:19:33.411 
"num_base_bdevs_discovered": 4, 00:19:33.411 "num_base_bdevs_operational": 4, 00:19:33.411 "process": { 00:19:33.411 "type": "rebuild", 00:19:33.411 "target": "spare", 00:19:33.411 "progress": { 00:19:33.411 "blocks": 21120, 00:19:33.411 "percent": 10 00:19:33.411 } 00:19:33.411 }, 00:19:33.411 "base_bdevs_list": [ 00:19:33.411 { 00:19:33.411 "name": "spare", 00:19:33.411 "uuid": "e176850d-1cfa-54c1-9c56-29b536efdc4f", 00:19:33.411 "is_configured": true, 00:19:33.411 "data_offset": 0, 00:19:33.411 "data_size": 65536 00:19:33.411 }, 00:19:33.411 { 00:19:33.411 "name": "BaseBdev2", 00:19:33.411 "uuid": "863116ce-7d20-5a2c-82d6-b3d852f6fbb2", 00:19:33.411 "is_configured": true, 00:19:33.411 "data_offset": 0, 00:19:33.411 "data_size": 65536 00:19:33.411 }, 00:19:33.411 { 00:19:33.411 "name": "BaseBdev3", 00:19:33.411 "uuid": "69ebc814-bfeb-5477-bcc2-64a8218c1c89", 00:19:33.411 "is_configured": true, 00:19:33.411 "data_offset": 0, 00:19:33.411 "data_size": 65536 00:19:33.411 }, 00:19:33.411 { 00:19:33.411 "name": "BaseBdev4", 00:19:33.411 "uuid": "e84e3098-053a-5bda-a87a-edccfc6c6230", 00:19:33.411 "is_configured": true, 00:19:33.411 "data_offset": 0, 00:19:33.411 "data_size": 65536 00:19:33.411 } 00:19:33.411 ] 00:19:33.411 }' 00:19:33.411 15:46:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:33.411 15:46:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:33.411 15:46:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:33.411 15:46:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:33.411 15:46:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:34.785 15:46:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:34.785 15:46:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:19:34.785 15:46:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:34.785 15:46:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:34.785 15:46:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:34.785 15:46:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:34.785 15:46:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.785 15:46:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.785 15:46:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.785 15:46:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.785 15:46:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.785 15:46:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:34.785 "name": "raid_bdev1", 00:19:34.785 "uuid": "8122ed2c-acce-4e67-9277-ea6411864e3f", 00:19:34.785 "strip_size_kb": 64, 00:19:34.785 "state": "online", 00:19:34.786 "raid_level": "raid5f", 00:19:34.786 "superblock": false, 00:19:34.786 "num_base_bdevs": 4, 00:19:34.786 "num_base_bdevs_discovered": 4, 00:19:34.786 "num_base_bdevs_operational": 4, 00:19:34.786 "process": { 00:19:34.786 "type": "rebuild", 00:19:34.786 "target": "spare", 00:19:34.786 "progress": { 00:19:34.786 "blocks": 42240, 00:19:34.786 "percent": 21 00:19:34.786 } 00:19:34.786 }, 00:19:34.786 "base_bdevs_list": [ 00:19:34.786 { 00:19:34.786 "name": "spare", 00:19:34.786 "uuid": "e176850d-1cfa-54c1-9c56-29b536efdc4f", 00:19:34.786 "is_configured": true, 00:19:34.786 "data_offset": 0, 00:19:34.786 "data_size": 65536 00:19:34.786 }, 00:19:34.786 { 00:19:34.786 "name": "BaseBdev2", 00:19:34.786 "uuid": 
"863116ce-7d20-5a2c-82d6-b3d852f6fbb2", 00:19:34.786 "is_configured": true, 00:19:34.786 "data_offset": 0, 00:19:34.786 "data_size": 65536 00:19:34.786 }, 00:19:34.786 { 00:19:34.786 "name": "BaseBdev3", 00:19:34.786 "uuid": "69ebc814-bfeb-5477-bcc2-64a8218c1c89", 00:19:34.786 "is_configured": true, 00:19:34.786 "data_offset": 0, 00:19:34.786 "data_size": 65536 00:19:34.786 }, 00:19:34.786 { 00:19:34.786 "name": "BaseBdev4", 00:19:34.786 "uuid": "e84e3098-053a-5bda-a87a-edccfc6c6230", 00:19:34.786 "is_configured": true, 00:19:34.786 "data_offset": 0, 00:19:34.786 "data_size": 65536 00:19:34.786 } 00:19:34.786 ] 00:19:34.786 }' 00:19:34.786 15:46:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:34.786 15:46:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:34.786 15:46:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:34.786 15:46:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:34.786 15:46:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:35.726 15:46:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:35.726 15:46:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:35.726 15:46:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:35.726 15:46:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:35.726 15:46:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:35.726 15:46:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:35.726 15:46:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.726 15:46:18 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.726 15:46:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.726 15:46:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.726 15:46:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.726 15:46:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:35.726 "name": "raid_bdev1", 00:19:35.726 "uuid": "8122ed2c-acce-4e67-9277-ea6411864e3f", 00:19:35.726 "strip_size_kb": 64, 00:19:35.726 "state": "online", 00:19:35.726 "raid_level": "raid5f", 00:19:35.726 "superblock": false, 00:19:35.726 "num_base_bdevs": 4, 00:19:35.726 "num_base_bdevs_discovered": 4, 00:19:35.726 "num_base_bdevs_operational": 4, 00:19:35.726 "process": { 00:19:35.726 "type": "rebuild", 00:19:35.726 "target": "spare", 00:19:35.726 "progress": { 00:19:35.726 "blocks": 63360, 00:19:35.726 "percent": 32 00:19:35.726 } 00:19:35.726 }, 00:19:35.726 "base_bdevs_list": [ 00:19:35.726 { 00:19:35.726 "name": "spare", 00:19:35.726 "uuid": "e176850d-1cfa-54c1-9c56-29b536efdc4f", 00:19:35.726 "is_configured": true, 00:19:35.726 "data_offset": 0, 00:19:35.726 "data_size": 65536 00:19:35.726 }, 00:19:35.726 { 00:19:35.726 "name": "BaseBdev2", 00:19:35.726 "uuid": "863116ce-7d20-5a2c-82d6-b3d852f6fbb2", 00:19:35.726 "is_configured": true, 00:19:35.726 "data_offset": 0, 00:19:35.726 "data_size": 65536 00:19:35.726 }, 00:19:35.726 { 00:19:35.726 "name": "BaseBdev3", 00:19:35.726 "uuid": "69ebc814-bfeb-5477-bcc2-64a8218c1c89", 00:19:35.726 "is_configured": true, 00:19:35.726 "data_offset": 0, 00:19:35.726 "data_size": 65536 00:19:35.726 }, 00:19:35.726 { 00:19:35.726 "name": "BaseBdev4", 00:19:35.726 "uuid": "e84e3098-053a-5bda-a87a-edccfc6c6230", 00:19:35.726 "is_configured": true, 00:19:35.726 "data_offset": 0, 00:19:35.726 "data_size": 65536 00:19:35.726 } 
00:19:35.726 ] 00:19:35.726 }' 00:19:35.726 15:46:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:35.726 15:46:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:35.726 15:46:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:35.726 15:46:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:35.726 15:46:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:36.662 15:46:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:36.662 15:46:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:36.662 15:46:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:36.662 15:46:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:36.662 15:46:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:36.662 15:46:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:36.662 15:46:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.662 15:46:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.662 15:46:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.662 15:46:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.921 15:46:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.921 15:46:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:36.921 "name": "raid_bdev1", 00:19:36.921 "uuid": "8122ed2c-acce-4e67-9277-ea6411864e3f", 00:19:36.921 
"strip_size_kb": 64, 00:19:36.921 "state": "online", 00:19:36.921 "raid_level": "raid5f", 00:19:36.921 "superblock": false, 00:19:36.921 "num_base_bdevs": 4, 00:19:36.921 "num_base_bdevs_discovered": 4, 00:19:36.921 "num_base_bdevs_operational": 4, 00:19:36.921 "process": { 00:19:36.921 "type": "rebuild", 00:19:36.921 "target": "spare", 00:19:36.921 "progress": { 00:19:36.921 "blocks": 86400, 00:19:36.921 "percent": 43 00:19:36.921 } 00:19:36.921 }, 00:19:36.921 "base_bdevs_list": [ 00:19:36.921 { 00:19:36.921 "name": "spare", 00:19:36.921 "uuid": "e176850d-1cfa-54c1-9c56-29b536efdc4f", 00:19:36.921 "is_configured": true, 00:19:36.921 "data_offset": 0, 00:19:36.921 "data_size": 65536 00:19:36.921 }, 00:19:36.921 { 00:19:36.921 "name": "BaseBdev2", 00:19:36.921 "uuid": "863116ce-7d20-5a2c-82d6-b3d852f6fbb2", 00:19:36.921 "is_configured": true, 00:19:36.921 "data_offset": 0, 00:19:36.921 "data_size": 65536 00:19:36.921 }, 00:19:36.921 { 00:19:36.921 "name": "BaseBdev3", 00:19:36.921 "uuid": "69ebc814-bfeb-5477-bcc2-64a8218c1c89", 00:19:36.921 "is_configured": true, 00:19:36.921 "data_offset": 0, 00:19:36.921 "data_size": 65536 00:19:36.921 }, 00:19:36.921 { 00:19:36.921 "name": "BaseBdev4", 00:19:36.921 "uuid": "e84e3098-053a-5bda-a87a-edccfc6c6230", 00:19:36.921 "is_configured": true, 00:19:36.921 "data_offset": 0, 00:19:36.921 "data_size": 65536 00:19:36.921 } 00:19:36.921 ] 00:19:36.921 }' 00:19:36.921 15:46:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:36.921 15:46:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:36.921 15:46:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:36.921 15:46:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:36.921 15:46:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:37.857 15:46:21 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:37.857 15:46:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:37.857 15:46:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:37.857 15:46:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:37.857 15:46:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:37.857 15:46:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:37.857 15:46:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.857 15:46:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.857 15:46:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.857 15:46:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.857 15:46:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.857 15:46:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:37.857 "name": "raid_bdev1", 00:19:37.857 "uuid": "8122ed2c-acce-4e67-9277-ea6411864e3f", 00:19:37.857 "strip_size_kb": 64, 00:19:37.857 "state": "online", 00:19:37.857 "raid_level": "raid5f", 00:19:37.857 "superblock": false, 00:19:37.857 "num_base_bdevs": 4, 00:19:37.857 "num_base_bdevs_discovered": 4, 00:19:37.857 "num_base_bdevs_operational": 4, 00:19:37.857 "process": { 00:19:37.857 "type": "rebuild", 00:19:37.857 "target": "spare", 00:19:37.857 "progress": { 00:19:37.857 "blocks": 107520, 00:19:37.857 "percent": 54 00:19:37.857 } 00:19:37.857 }, 00:19:37.857 "base_bdevs_list": [ 00:19:37.857 { 00:19:37.857 "name": "spare", 00:19:37.857 "uuid": "e176850d-1cfa-54c1-9c56-29b536efdc4f", 
00:19:37.857 "is_configured": true, 00:19:37.857 "data_offset": 0, 00:19:37.857 "data_size": 65536 00:19:37.857 }, 00:19:37.857 { 00:19:37.857 "name": "BaseBdev2", 00:19:37.857 "uuid": "863116ce-7d20-5a2c-82d6-b3d852f6fbb2", 00:19:37.857 "is_configured": true, 00:19:37.857 "data_offset": 0, 00:19:37.857 "data_size": 65536 00:19:37.857 }, 00:19:37.857 { 00:19:37.857 "name": "BaseBdev3", 00:19:37.857 "uuid": "69ebc814-bfeb-5477-bcc2-64a8218c1c89", 00:19:37.857 "is_configured": true, 00:19:37.857 "data_offset": 0, 00:19:37.857 "data_size": 65536 00:19:37.857 }, 00:19:37.857 { 00:19:37.857 "name": "BaseBdev4", 00:19:37.857 "uuid": "e84e3098-053a-5bda-a87a-edccfc6c6230", 00:19:37.857 "is_configured": true, 00:19:37.857 "data_offset": 0, 00:19:37.857 "data_size": 65536 00:19:37.857 } 00:19:37.857 ] 00:19:37.857 }' 00:19:37.857 15:46:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:38.142 15:46:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:38.142 15:46:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:38.142 15:46:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:38.142 15:46:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:39.078 15:46:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:39.078 15:46:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:39.078 15:46:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:39.078 15:46:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:39.078 15:46:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:39.078 15:46:22 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:39.078 15:46:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.078 15:46:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.078 15:46:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.078 15:46:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.078 15:46:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.078 15:46:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:39.078 "name": "raid_bdev1", 00:19:39.078 "uuid": "8122ed2c-acce-4e67-9277-ea6411864e3f", 00:19:39.078 "strip_size_kb": 64, 00:19:39.078 "state": "online", 00:19:39.078 "raid_level": "raid5f", 00:19:39.078 "superblock": false, 00:19:39.078 "num_base_bdevs": 4, 00:19:39.078 "num_base_bdevs_discovered": 4, 00:19:39.078 "num_base_bdevs_operational": 4, 00:19:39.078 "process": { 00:19:39.078 "type": "rebuild", 00:19:39.078 "target": "spare", 00:19:39.078 "progress": { 00:19:39.078 "blocks": 128640, 00:19:39.078 "percent": 65 00:19:39.078 } 00:19:39.078 }, 00:19:39.078 "base_bdevs_list": [ 00:19:39.078 { 00:19:39.078 "name": "spare", 00:19:39.078 "uuid": "e176850d-1cfa-54c1-9c56-29b536efdc4f", 00:19:39.078 "is_configured": true, 00:19:39.078 "data_offset": 0, 00:19:39.078 "data_size": 65536 00:19:39.078 }, 00:19:39.078 { 00:19:39.078 "name": "BaseBdev2", 00:19:39.078 "uuid": "863116ce-7d20-5a2c-82d6-b3d852f6fbb2", 00:19:39.078 "is_configured": true, 00:19:39.078 "data_offset": 0, 00:19:39.078 "data_size": 65536 00:19:39.078 }, 00:19:39.078 { 00:19:39.078 "name": "BaseBdev3", 00:19:39.078 "uuid": "69ebc814-bfeb-5477-bcc2-64a8218c1c89", 00:19:39.078 "is_configured": true, 00:19:39.078 "data_offset": 0, 00:19:39.078 "data_size": 65536 00:19:39.078 }, 00:19:39.078 { 00:19:39.078 "name": 
"BaseBdev4", 00:19:39.078 "uuid": "e84e3098-053a-5bda-a87a-edccfc6c6230", 00:19:39.078 "is_configured": true, 00:19:39.078 "data_offset": 0, 00:19:39.078 "data_size": 65536 00:19:39.078 } 00:19:39.078 ] 00:19:39.078 }' 00:19:39.078 15:46:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:39.078 15:46:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:39.079 15:46:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:39.079 15:46:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:39.079 15:46:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:40.456 15:46:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:40.456 15:46:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:40.456 15:46:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:40.456 15:46:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:40.456 15:46:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:40.456 15:46:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:40.456 15:46:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.456 15:46:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.456 15:46:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.456 15:46:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.456 15:46:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.456 15:46:23 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:40.456 "name": "raid_bdev1", 00:19:40.456 "uuid": "8122ed2c-acce-4e67-9277-ea6411864e3f", 00:19:40.456 "strip_size_kb": 64, 00:19:40.456 "state": "online", 00:19:40.456 "raid_level": "raid5f", 00:19:40.456 "superblock": false, 00:19:40.456 "num_base_bdevs": 4, 00:19:40.456 "num_base_bdevs_discovered": 4, 00:19:40.456 "num_base_bdevs_operational": 4, 00:19:40.456 "process": { 00:19:40.456 "type": "rebuild", 00:19:40.456 "target": "spare", 00:19:40.456 "progress": { 00:19:40.456 "blocks": 151680, 00:19:40.456 "percent": 77 00:19:40.456 } 00:19:40.456 }, 00:19:40.456 "base_bdevs_list": [ 00:19:40.456 { 00:19:40.456 "name": "spare", 00:19:40.456 "uuid": "e176850d-1cfa-54c1-9c56-29b536efdc4f", 00:19:40.456 "is_configured": true, 00:19:40.456 "data_offset": 0, 00:19:40.456 "data_size": 65536 00:19:40.456 }, 00:19:40.456 { 00:19:40.456 "name": "BaseBdev2", 00:19:40.456 "uuid": "863116ce-7d20-5a2c-82d6-b3d852f6fbb2", 00:19:40.457 "is_configured": true, 00:19:40.457 "data_offset": 0, 00:19:40.457 "data_size": 65536 00:19:40.457 }, 00:19:40.457 { 00:19:40.457 "name": "BaseBdev3", 00:19:40.457 "uuid": "69ebc814-bfeb-5477-bcc2-64a8218c1c89", 00:19:40.457 "is_configured": true, 00:19:40.457 "data_offset": 0, 00:19:40.457 "data_size": 65536 00:19:40.457 }, 00:19:40.457 { 00:19:40.457 "name": "BaseBdev4", 00:19:40.457 "uuid": "e84e3098-053a-5bda-a87a-edccfc6c6230", 00:19:40.457 "is_configured": true, 00:19:40.457 "data_offset": 0, 00:19:40.457 "data_size": 65536 00:19:40.457 } 00:19:40.457 ] 00:19:40.457 }' 00:19:40.457 15:46:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:40.457 15:46:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:40.457 15:46:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:40.457 15:46:23 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:40.457 15:46:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:41.394 15:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:41.394 15:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:41.394 15:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:41.394 15:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:41.394 15:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:41.394 15:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:41.394 15:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.394 15:46:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.394 15:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.394 15:46:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.394 15:46:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.394 15:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:41.394 "name": "raid_bdev1", 00:19:41.394 "uuid": "8122ed2c-acce-4e67-9277-ea6411864e3f", 00:19:41.394 "strip_size_kb": 64, 00:19:41.394 "state": "online", 00:19:41.394 "raid_level": "raid5f", 00:19:41.394 "superblock": false, 00:19:41.394 "num_base_bdevs": 4, 00:19:41.394 "num_base_bdevs_discovered": 4, 00:19:41.394 "num_base_bdevs_operational": 4, 00:19:41.394 "process": { 00:19:41.394 "type": "rebuild", 00:19:41.394 "target": "spare", 00:19:41.394 "progress": { 00:19:41.394 "blocks": 172800, 00:19:41.394 "percent": 87 
00:19:41.394 } 00:19:41.394 }, 00:19:41.394 "base_bdevs_list": [ 00:19:41.394 { 00:19:41.394 "name": "spare", 00:19:41.394 "uuid": "e176850d-1cfa-54c1-9c56-29b536efdc4f", 00:19:41.394 "is_configured": true, 00:19:41.394 "data_offset": 0, 00:19:41.394 "data_size": 65536 00:19:41.394 }, 00:19:41.394 { 00:19:41.394 "name": "BaseBdev2", 00:19:41.394 "uuid": "863116ce-7d20-5a2c-82d6-b3d852f6fbb2", 00:19:41.394 "is_configured": true, 00:19:41.394 "data_offset": 0, 00:19:41.394 "data_size": 65536 00:19:41.394 }, 00:19:41.394 { 00:19:41.394 "name": "BaseBdev3", 00:19:41.394 "uuid": "69ebc814-bfeb-5477-bcc2-64a8218c1c89", 00:19:41.394 "is_configured": true, 00:19:41.394 "data_offset": 0, 00:19:41.394 "data_size": 65536 00:19:41.394 }, 00:19:41.394 { 00:19:41.394 "name": "BaseBdev4", 00:19:41.394 "uuid": "e84e3098-053a-5bda-a87a-edccfc6c6230", 00:19:41.394 "is_configured": true, 00:19:41.394 "data_offset": 0, 00:19:41.394 "data_size": 65536 00:19:41.394 } 00:19:41.394 ] 00:19:41.394 }' 00:19:41.394 15:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:41.394 15:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:41.394 15:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:41.394 15:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:41.394 15:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:42.332 15:46:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:42.332 15:46:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:42.332 15:46:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:42.332 15:46:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:19:42.332 15:46:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:42.332 15:46:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:42.332 15:46:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.332 15:46:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.332 15:46:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.332 15:46:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.591 15:46:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.591 15:46:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:42.591 "name": "raid_bdev1", 00:19:42.591 "uuid": "8122ed2c-acce-4e67-9277-ea6411864e3f", 00:19:42.591 "strip_size_kb": 64, 00:19:42.591 "state": "online", 00:19:42.591 "raid_level": "raid5f", 00:19:42.591 "superblock": false, 00:19:42.591 "num_base_bdevs": 4, 00:19:42.591 "num_base_bdevs_discovered": 4, 00:19:42.591 "num_base_bdevs_operational": 4, 00:19:42.591 "process": { 00:19:42.591 "type": "rebuild", 00:19:42.591 "target": "spare", 00:19:42.591 "progress": { 00:19:42.591 "blocks": 193920, 00:19:42.591 "percent": 98 00:19:42.591 } 00:19:42.591 }, 00:19:42.591 "base_bdevs_list": [ 00:19:42.591 { 00:19:42.591 "name": "spare", 00:19:42.591 "uuid": "e176850d-1cfa-54c1-9c56-29b536efdc4f", 00:19:42.591 "is_configured": true, 00:19:42.591 "data_offset": 0, 00:19:42.591 "data_size": 65536 00:19:42.591 }, 00:19:42.591 { 00:19:42.591 "name": "BaseBdev2", 00:19:42.591 "uuid": "863116ce-7d20-5a2c-82d6-b3d852f6fbb2", 00:19:42.591 "is_configured": true, 00:19:42.591 "data_offset": 0, 00:19:42.591 "data_size": 65536 00:19:42.591 }, 00:19:42.591 { 00:19:42.591 "name": "BaseBdev3", 00:19:42.591 "uuid": 
"69ebc814-bfeb-5477-bcc2-64a8218c1c89", 00:19:42.591 "is_configured": true, 00:19:42.591 "data_offset": 0, 00:19:42.591 "data_size": 65536 00:19:42.591 }, 00:19:42.591 { 00:19:42.591 "name": "BaseBdev4", 00:19:42.591 "uuid": "e84e3098-053a-5bda-a87a-edccfc6c6230", 00:19:42.591 "is_configured": true, 00:19:42.591 "data_offset": 0, 00:19:42.591 "data_size": 65536 00:19:42.591 } 00:19:42.591 ] 00:19:42.591 }' 00:19:42.591 15:46:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:42.591 15:46:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:42.591 15:46:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:42.591 15:46:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:42.591 15:46:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:42.591 [2024-12-06 15:46:25.759465] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:42.591 [2024-12-06 15:46:25.759564] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:42.591 [2024-12-06 15:46:25.759630] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:43.528 15:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:43.529 15:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:43.529 15:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:43.529 15:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:43.529 15:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:43.529 15:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:19:43.529 15:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.529 15:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.529 15:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.529 15:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.529 15:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.529 15:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:43.529 "name": "raid_bdev1", 00:19:43.529 "uuid": "8122ed2c-acce-4e67-9277-ea6411864e3f", 00:19:43.529 "strip_size_kb": 64, 00:19:43.529 "state": "online", 00:19:43.529 "raid_level": "raid5f", 00:19:43.529 "superblock": false, 00:19:43.529 "num_base_bdevs": 4, 00:19:43.529 "num_base_bdevs_discovered": 4, 00:19:43.529 "num_base_bdevs_operational": 4, 00:19:43.529 "base_bdevs_list": [ 00:19:43.529 { 00:19:43.529 "name": "spare", 00:19:43.529 "uuid": "e176850d-1cfa-54c1-9c56-29b536efdc4f", 00:19:43.529 "is_configured": true, 00:19:43.529 "data_offset": 0, 00:19:43.529 "data_size": 65536 00:19:43.529 }, 00:19:43.529 { 00:19:43.529 "name": "BaseBdev2", 00:19:43.529 "uuid": "863116ce-7d20-5a2c-82d6-b3d852f6fbb2", 00:19:43.529 "is_configured": true, 00:19:43.529 "data_offset": 0, 00:19:43.529 "data_size": 65536 00:19:43.529 }, 00:19:43.529 { 00:19:43.529 "name": "BaseBdev3", 00:19:43.529 "uuid": "69ebc814-bfeb-5477-bcc2-64a8218c1c89", 00:19:43.529 "is_configured": true, 00:19:43.529 "data_offset": 0, 00:19:43.529 "data_size": 65536 00:19:43.529 }, 00:19:43.529 { 00:19:43.529 "name": "BaseBdev4", 00:19:43.529 "uuid": "e84e3098-053a-5bda-a87a-edccfc6c6230", 00:19:43.529 "is_configured": true, 00:19:43.529 "data_offset": 0, 00:19:43.529 "data_size": 65536 00:19:43.529 } 00:19:43.529 ] 00:19:43.529 }' 00:19:43.529 15:46:26 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:43.788 15:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:43.788 15:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:43.788 15:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:43.788 15:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:19:43.788 15:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:43.788 15:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:43.788 15:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:43.788 15:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:43.788 15:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:43.788 15:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.788 15:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.788 15:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.788 15:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.788 15:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.788 15:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:43.788 "name": "raid_bdev1", 00:19:43.788 "uuid": "8122ed2c-acce-4e67-9277-ea6411864e3f", 00:19:43.788 "strip_size_kb": 64, 00:19:43.788 "state": "online", 00:19:43.788 "raid_level": "raid5f", 00:19:43.788 "superblock": false, 00:19:43.788 "num_base_bdevs": 4, 00:19:43.788 
"num_base_bdevs_discovered": 4, 00:19:43.788 "num_base_bdevs_operational": 4, 00:19:43.788 "base_bdevs_list": [ 00:19:43.788 { 00:19:43.788 "name": "spare", 00:19:43.788 "uuid": "e176850d-1cfa-54c1-9c56-29b536efdc4f", 00:19:43.788 "is_configured": true, 00:19:43.788 "data_offset": 0, 00:19:43.788 "data_size": 65536 00:19:43.788 }, 00:19:43.788 { 00:19:43.788 "name": "BaseBdev2", 00:19:43.788 "uuid": "863116ce-7d20-5a2c-82d6-b3d852f6fbb2", 00:19:43.788 "is_configured": true, 00:19:43.788 "data_offset": 0, 00:19:43.788 "data_size": 65536 00:19:43.788 }, 00:19:43.788 { 00:19:43.788 "name": "BaseBdev3", 00:19:43.788 "uuid": "69ebc814-bfeb-5477-bcc2-64a8218c1c89", 00:19:43.788 "is_configured": true, 00:19:43.788 "data_offset": 0, 00:19:43.788 "data_size": 65536 00:19:43.788 }, 00:19:43.789 { 00:19:43.789 "name": "BaseBdev4", 00:19:43.789 "uuid": "e84e3098-053a-5bda-a87a-edccfc6c6230", 00:19:43.789 "is_configured": true, 00:19:43.789 "data_offset": 0, 00:19:43.789 "data_size": 65536 00:19:43.789 } 00:19:43.789 ] 00:19:43.789 }' 00:19:43.789 15:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:43.789 15:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:43.789 15:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:43.789 15:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:43.789 15:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:43.789 15:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:43.789 15:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:43.789 15:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:43.789 15:46:27 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:43.789 15:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:43.789 15:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:43.789 15:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:43.789 15:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.789 15:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:43.789 15:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.789 15:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.789 15:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.789 15:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.789 15:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.789 15:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:43.789 "name": "raid_bdev1", 00:19:43.789 "uuid": "8122ed2c-acce-4e67-9277-ea6411864e3f", 00:19:43.789 "strip_size_kb": 64, 00:19:43.789 "state": "online", 00:19:43.789 "raid_level": "raid5f", 00:19:43.789 "superblock": false, 00:19:43.789 "num_base_bdevs": 4, 00:19:43.789 "num_base_bdevs_discovered": 4, 00:19:43.789 "num_base_bdevs_operational": 4, 00:19:43.789 "base_bdevs_list": [ 00:19:43.789 { 00:19:43.789 "name": "spare", 00:19:43.789 "uuid": "e176850d-1cfa-54c1-9c56-29b536efdc4f", 00:19:43.789 "is_configured": true, 00:19:43.789 "data_offset": 0, 00:19:43.789 "data_size": 65536 00:19:43.789 }, 00:19:43.789 { 00:19:43.789 "name": "BaseBdev2", 00:19:43.789 "uuid": "863116ce-7d20-5a2c-82d6-b3d852f6fbb2", 00:19:43.789 "is_configured": true, 00:19:43.789 
"data_offset": 0, 00:19:43.789 "data_size": 65536 00:19:43.789 }, 00:19:43.789 { 00:19:43.789 "name": "BaseBdev3", 00:19:43.789 "uuid": "69ebc814-bfeb-5477-bcc2-64a8218c1c89", 00:19:43.789 "is_configured": true, 00:19:43.789 "data_offset": 0, 00:19:43.789 "data_size": 65536 00:19:43.789 }, 00:19:43.789 { 00:19:43.789 "name": "BaseBdev4", 00:19:43.789 "uuid": "e84e3098-053a-5bda-a87a-edccfc6c6230", 00:19:43.789 "is_configured": true, 00:19:43.789 "data_offset": 0, 00:19:43.789 "data_size": 65536 00:19:43.789 } 00:19:43.789 ] 00:19:43.789 }' 00:19:43.789 15:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.789 15:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.357 15:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:44.357 15:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.357 15:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.357 [2024-12-06 15:46:27.412045] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:44.357 [2024-12-06 15:46:27.412217] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:44.357 [2024-12-06 15:46:27.412464] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:44.357 [2024-12-06 15:46:27.412695] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:44.357 [2024-12-06 15:46:27.412822] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:44.357 15:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.357 15:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:19:44.357 15:46:27 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.357 15:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.357 15:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.357 15:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.357 15:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:44.357 15:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:44.357 15:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:44.357 15:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:44.357 15:46:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:44.357 15:46:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:44.357 15:46:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:44.357 15:46:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:44.357 15:46:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:44.357 15:46:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:44.357 15:46:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:44.357 15:46:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:44.357 15:46:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:44.615 /dev/nbd0 00:19:44.615 15:46:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:44.615 15:46:27 bdev_raid.raid5f_rebuild_test 
-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:44.615 15:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:44.615 15:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:19:44.615 15:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:44.615 15:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:44.615 15:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:44.615 15:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:19:44.615 15:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:44.615 15:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:44.615 15:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:44.615 1+0 records in 00:19:44.615 1+0 records out 00:19:44.615 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309963 s, 13.2 MB/s 00:19:44.615 15:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:44.615 15:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:19:44.615 15:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:44.615 15:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:44.615 15:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:19:44.615 15:46:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:44.615 15:46:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
00:19:44.615 15:46:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:44.874 /dev/nbd1 00:19:44.874 15:46:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:44.874 15:46:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:44.874 15:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:44.874 15:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:19:44.874 15:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:44.874 15:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:44.874 15:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:44.874 15:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:19:44.874 15:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:44.874 15:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:44.874 15:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:44.874 1+0 records in 00:19:44.874 1+0 records out 00:19:44.874 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431961 s, 9.5 MB/s 00:19:44.874 15:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:44.874 15:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:19:44.874 15:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:44.874 15:46:27 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:44.874 15:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:19:44.874 15:46:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:44.874 15:46:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:44.874 15:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:45.133 15:46:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:45.133 15:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:45.133 15:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:45.133 15:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:45.133 15:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:45.133 15:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:45.133 15:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:45.133 15:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:45.133 15:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:45.133 15:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:45.133 15:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:45.133 15:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:45.133 15:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:45.133 15:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 
00:19:45.133 15:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:45.133 15:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:45.133 15:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:45.391 15:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:45.391 15:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:45.391 15:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:45.391 15:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:45.391 15:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:45.392 15:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:45.392 15:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:45.392 15:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:45.392 15:46:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:19:45.392 15:46:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84600 00:19:45.392 15:46:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84600 ']' 00:19:45.392 15:46:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84600 00:19:45.392 15:46:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:19:45.392 15:46:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:45.392 15:46:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84600 00:19:45.657 killing process with pid 84600 00:19:45.657 Received shutdown signal, test time 
was about 60.000000 seconds 00:19:45.657 00:19:45.657 Latency(us) 00:19:45.657 [2024-12-06T15:46:28.952Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:45.657 [2024-12-06T15:46:28.952Z] =================================================================================================================== 00:19:45.657 [2024-12-06T15:46:28.952Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:45.657 15:46:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:45.657 15:46:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:45.657 15:46:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84600' 00:19:45.657 15:46:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84600 00:19:45.657 [2024-12-06 15:46:28.697915] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:45.657 15:46:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84600 00:19:46.229 [2024-12-06 15:46:29.218416] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:47.167 15:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:19:47.167 00:19:47.167 real 0m20.098s 00:19:47.167 user 0m23.519s 00:19:47.167 sys 0m2.667s 00:19:47.167 ************************************ 00:19:47.167 END TEST raid5f_rebuild_test 00:19:47.167 ************************************ 00:19:47.167 15:46:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:47.167 15:46:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.426 15:46:30 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:19:47.426 15:46:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:47.426 15:46:30 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:47.426 15:46:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:47.426 ************************************ 00:19:47.426 START TEST raid5f_rebuild_test_sb 00:19:47.426 ************************************ 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:47.426 15:46:30 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85123 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # 
waitforlisten 85123 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:47.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85123 ']' 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:47.426 15:46:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.426 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:47.426 Zero copy mechanism will not be used. 00:19:47.426 [2024-12-06 15:46:30.624707] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:19:47.426 [2024-12-06 15:46:30.624851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85123 ] 00:19:47.685 [2024-12-06 15:46:30.811102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.685 [2024-12-06 15:46:30.949346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:47.956 [2024-12-06 15:46:31.184930] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:47.956 [2024-12-06 15:46:31.184970] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:48.219 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:48.219 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:19:48.219 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:48.219 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:48.219 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.219 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.219 BaseBdev1_malloc 00:19:48.219 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.219 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:48.219 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.219 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.219 [2024-12-06 15:46:31.507534] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:48.219 [2024-12-06 15:46:31.507610] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:48.219 [2024-12-06 15:46:31.507640] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:48.219 [2024-12-06 15:46:31.507656] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:48.219 [2024-12-06 15:46:31.510329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:48.219 [2024-12-06 15:46:31.510375] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:48.477 BaseBdev1 00:19:48.477 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.477 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:48.477 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:48.477 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.477 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.477 BaseBdev2_malloc 00:19:48.477 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.477 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:48.477 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.477 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.477 [2024-12-06 15:46:31.568038] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:48.477 [2024-12-06 15:46:31.568108] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:19:48.477 [2024-12-06 15:46:31.568137] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:48.477 [2024-12-06 15:46:31.568152] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:48.477 [2024-12-06 15:46:31.570860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:48.477 [2024-12-06 15:46:31.570902] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:48.477 BaseBdev2 00:19:48.477 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.477 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:48.477 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:48.477 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.477 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.477 BaseBdev3_malloc 00:19:48.477 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.477 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:48.477 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.477 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.477 [2024-12-06 15:46:31.638433] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:48.477 [2024-12-06 15:46:31.638492] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:48.477 [2024-12-06 15:46:31.638539] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:48.477 [2024-12-06 
15:46:31.638555] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:48.477 [2024-12-06 15:46:31.641179] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:48.477 [2024-12-06 15:46:31.641222] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:48.477 BaseBdev3 00:19:48.477 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.477 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:48.477 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:48.477 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.477 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.477 BaseBdev4_malloc 00:19:48.477 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.477 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:48.477 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.477 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.477 [2024-12-06 15:46:31.695918] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:48.477 [2024-12-06 15:46:31.696135] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:48.477 [2024-12-06 15:46:31.696166] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:48.477 [2024-12-06 15:46:31.696182] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:48.477 [2024-12-06 15:46:31.698910] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:19:48.477 [2024-12-06 15:46:31.698958] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:48.477 BaseBdev4 00:19:48.477 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.477 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:48.477 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.477 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.477 spare_malloc 00:19:48.477 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.477 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:48.477 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.477 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.477 spare_delay 00:19:48.477 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.477 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:48.477 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.477 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.477 [2024-12-06 15:46:31.768874] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:48.477 [2024-12-06 15:46:31.768931] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:48.477 [2024-12-06 15:46:31.768953] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:19:48.477 [2024-12-06 15:46:31.768968] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:48.735 [2024-12-06 15:46:31.771655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:48.735 [2024-12-06 15:46:31.771696] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:48.735 spare 00:19:48.735 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.735 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:19:48.735 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.735 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.735 [2024-12-06 15:46:31.780931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:48.735 [2024-12-06 15:46:31.783390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:48.735 [2024-12-06 15:46:31.783453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:48.735 [2024-12-06 15:46:31.783527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:48.735 [2024-12-06 15:46:31.783738] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:48.735 [2024-12-06 15:46:31.783753] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:48.735 [2024-12-06 15:46:31.784032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:48.735 [2024-12-06 15:46:31.791526] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:48.735 [2024-12-06 15:46:31.791659] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:19:48.735 [2024-12-06 15:46:31.791966] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:48.735 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.735 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:48.735 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:48.735 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:48.735 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:48.735 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:48.735 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:48.735 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:48.735 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:48.735 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:48.735 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:48.735 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.735 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.735 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.735 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.735 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.735 15:46:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:48.735 "name": "raid_bdev1", 00:19:48.735 "uuid": "8aab5c34-7f48-4b04-b60b-93e7e38bad85", 00:19:48.735 "strip_size_kb": 64, 00:19:48.735 "state": "online", 00:19:48.735 "raid_level": "raid5f", 00:19:48.735 "superblock": true, 00:19:48.735 "num_base_bdevs": 4, 00:19:48.735 "num_base_bdevs_discovered": 4, 00:19:48.735 "num_base_bdevs_operational": 4, 00:19:48.735 "base_bdevs_list": [ 00:19:48.735 { 00:19:48.735 "name": "BaseBdev1", 00:19:48.735 "uuid": "7e0baacc-32c6-5fdb-aff7-98bd8200284e", 00:19:48.735 "is_configured": true, 00:19:48.735 "data_offset": 2048, 00:19:48.735 "data_size": 63488 00:19:48.735 }, 00:19:48.735 { 00:19:48.735 "name": "BaseBdev2", 00:19:48.735 "uuid": "0415c438-af98-5c75-8095-ffa857ed6a06", 00:19:48.735 "is_configured": true, 00:19:48.735 "data_offset": 2048, 00:19:48.735 "data_size": 63488 00:19:48.735 }, 00:19:48.735 { 00:19:48.735 "name": "BaseBdev3", 00:19:48.735 "uuid": "90e0d0b2-8d6f-5c28-a1ea-d1b8c3ffd86d", 00:19:48.735 "is_configured": true, 00:19:48.735 "data_offset": 2048, 00:19:48.735 "data_size": 63488 00:19:48.735 }, 00:19:48.735 { 00:19:48.735 "name": "BaseBdev4", 00:19:48.735 "uuid": "fef0820f-4c4a-52fa-b7aa-810b65e30c35", 00:19:48.735 "is_configured": true, 00:19:48.735 "data_offset": 2048, 00:19:48.735 "data_size": 63488 00:19:48.735 } 00:19:48.735 ] 00:19:48.735 }' 00:19:48.735 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:48.735 15:46:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.994 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:48.994 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:48.994 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.994 15:46:32 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.994 [2024-12-06 15:46:32.201189] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:48.994 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.994 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:19:48.994 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:48.994 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.994 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.994 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.994 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.994 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:19:48.994 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:48.994 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:48.994 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:48.994 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:48.994 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:48.994 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:48.994 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:48.994 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:48.994 15:46:32 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:48.994 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:48.994 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:48.994 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:48.994 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:49.252 [2024-12-06 15:46:32.460742] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:49.252 /dev/nbd0 00:19:49.252 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:49.252 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:49.252 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:49.253 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:49.253 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:49.253 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:49.253 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:49.253 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:49.253 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:49.253 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:49.253 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:49.253 1+0 records in 00:19:49.253 
1+0 records out 00:19:49.253 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361252 s, 11.3 MB/s 00:19:49.253 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:49.253 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:49.253 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:49.253 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:49.253 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:49.253 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:49.253 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:49.253 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:19:49.253 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:19:49.253 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:19:49.253 15:46:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:19:49.819 496+0 records in 00:19:49.819 496+0 records out 00:19:49.819 97517568 bytes (98 MB, 93 MiB) copied, 0.502068 s, 194 MB/s 00:19:49.819 15:46:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:49.819 15:46:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:49.819 15:46:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:49.819 15:46:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:49.819 15:46:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:49.819 15:46:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:49.819 15:46:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:50.078 15:46:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:50.079 [2024-12-06 15:46:33.271846] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:50.079 15:46:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:50.079 15:46:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:50.079 15:46:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:50.079 15:46:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:50.079 15:46:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:50.079 15:46:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:50.079 15:46:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:50.079 15:46:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:50.079 15:46:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.079 15:46:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.079 [2024-12-06 15:46:33.298301] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:50.079 15:46:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.079 15:46:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:50.079 15:46:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:50.079 15:46:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:50.079 15:46:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:50.079 15:46:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:50.079 15:46:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:50.079 15:46:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:50.079 15:46:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:50.079 15:46:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:50.079 15:46:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:50.079 15:46:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.079 15:46:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.079 15:46:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.079 15:46:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.079 15:46:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.079 15:46:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:50.079 "name": "raid_bdev1", 00:19:50.079 "uuid": "8aab5c34-7f48-4b04-b60b-93e7e38bad85", 00:19:50.079 "strip_size_kb": 64, 00:19:50.079 "state": "online", 00:19:50.079 "raid_level": "raid5f", 00:19:50.079 "superblock": true, 00:19:50.079 "num_base_bdevs": 4, 00:19:50.079 "num_base_bdevs_discovered": 3, 00:19:50.079 "num_base_bdevs_operational": 3, 00:19:50.079 
"base_bdevs_list": [ 00:19:50.079 { 00:19:50.079 "name": null, 00:19:50.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.079 "is_configured": false, 00:19:50.079 "data_offset": 0, 00:19:50.079 "data_size": 63488 00:19:50.079 }, 00:19:50.079 { 00:19:50.079 "name": "BaseBdev2", 00:19:50.079 "uuid": "0415c438-af98-5c75-8095-ffa857ed6a06", 00:19:50.079 "is_configured": true, 00:19:50.079 "data_offset": 2048, 00:19:50.079 "data_size": 63488 00:19:50.079 }, 00:19:50.079 { 00:19:50.079 "name": "BaseBdev3", 00:19:50.079 "uuid": "90e0d0b2-8d6f-5c28-a1ea-d1b8c3ffd86d", 00:19:50.079 "is_configured": true, 00:19:50.079 "data_offset": 2048, 00:19:50.079 "data_size": 63488 00:19:50.079 }, 00:19:50.079 { 00:19:50.079 "name": "BaseBdev4", 00:19:50.079 "uuid": "fef0820f-4c4a-52fa-b7aa-810b65e30c35", 00:19:50.079 "is_configured": true, 00:19:50.079 "data_offset": 2048, 00:19:50.079 "data_size": 63488 00:19:50.079 } 00:19:50.079 ] 00:19:50.079 }' 00:19:50.079 15:46:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:50.079 15:46:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.647 15:46:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:50.647 15:46:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.647 15:46:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.647 [2024-12-06 15:46:33.693785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:50.647 [2024-12-06 15:46:33.711142] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:19:50.647 15:46:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.647 15:46:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:50.647 [2024-12-06 15:46:33.721497] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:51.585 15:46:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:51.585 15:46:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:51.585 15:46:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:51.585 15:46:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:51.585 15:46:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:51.585 15:46:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.585 15:46:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.585 15:46:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.585 15:46:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.585 15:46:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.585 15:46:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:51.585 "name": "raid_bdev1", 00:19:51.585 "uuid": "8aab5c34-7f48-4b04-b60b-93e7e38bad85", 00:19:51.585 "strip_size_kb": 64, 00:19:51.585 "state": "online", 00:19:51.585 "raid_level": "raid5f", 00:19:51.585 "superblock": true, 00:19:51.585 "num_base_bdevs": 4, 00:19:51.585 "num_base_bdevs_discovered": 4, 00:19:51.585 "num_base_bdevs_operational": 4, 00:19:51.585 "process": { 00:19:51.585 "type": "rebuild", 00:19:51.585 "target": "spare", 00:19:51.585 "progress": { 00:19:51.585 "blocks": 19200, 00:19:51.585 "percent": 10 00:19:51.585 } 00:19:51.585 }, 00:19:51.585 "base_bdevs_list": [ 00:19:51.585 { 00:19:51.585 "name": "spare", 00:19:51.585 "uuid": 
"60f87804-545e-512c-9b00-27abf362ab7e", 00:19:51.585 "is_configured": true, 00:19:51.585 "data_offset": 2048, 00:19:51.585 "data_size": 63488 00:19:51.585 }, 00:19:51.585 { 00:19:51.585 "name": "BaseBdev2", 00:19:51.585 "uuid": "0415c438-af98-5c75-8095-ffa857ed6a06", 00:19:51.585 "is_configured": true, 00:19:51.585 "data_offset": 2048, 00:19:51.585 "data_size": 63488 00:19:51.585 }, 00:19:51.585 { 00:19:51.585 "name": "BaseBdev3", 00:19:51.585 "uuid": "90e0d0b2-8d6f-5c28-a1ea-d1b8c3ffd86d", 00:19:51.585 "is_configured": true, 00:19:51.585 "data_offset": 2048, 00:19:51.585 "data_size": 63488 00:19:51.585 }, 00:19:51.585 { 00:19:51.585 "name": "BaseBdev4", 00:19:51.585 "uuid": "fef0820f-4c4a-52fa-b7aa-810b65e30c35", 00:19:51.585 "is_configured": true, 00:19:51.585 "data_offset": 2048, 00:19:51.585 "data_size": 63488 00:19:51.585 } 00:19:51.585 ] 00:19:51.585 }' 00:19:51.585 15:46:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:51.585 15:46:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:51.585 15:46:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:51.585 15:46:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:51.585 15:46:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:51.585 15:46:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.585 15:46:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.585 [2024-12-06 15:46:34.840806] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:51.845 [2024-12-06 15:46:34.929995] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:51.845 [2024-12-06 15:46:34.930076] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:51.845 [2024-12-06 15:46:34.930097] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:51.845 [2024-12-06 15:46:34.930110] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:51.845 15:46:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.845 15:46:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:51.845 15:46:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:51.845 15:46:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:51.845 15:46:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:51.845 15:46:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:51.845 15:46:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:51.845 15:46:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:51.845 15:46:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:51.845 15:46:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:51.845 15:46:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:51.845 15:46:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.845 15:46:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.846 15:46:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.846 15:46:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:19:51.846 15:46:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.846 15:46:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:51.846 "name": "raid_bdev1", 00:19:51.846 "uuid": "8aab5c34-7f48-4b04-b60b-93e7e38bad85", 00:19:51.846 "strip_size_kb": 64, 00:19:51.846 "state": "online", 00:19:51.846 "raid_level": "raid5f", 00:19:51.846 "superblock": true, 00:19:51.846 "num_base_bdevs": 4, 00:19:51.846 "num_base_bdevs_discovered": 3, 00:19:51.846 "num_base_bdevs_operational": 3, 00:19:51.846 "base_bdevs_list": [ 00:19:51.846 { 00:19:51.846 "name": null, 00:19:51.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.846 "is_configured": false, 00:19:51.846 "data_offset": 0, 00:19:51.846 "data_size": 63488 00:19:51.846 }, 00:19:51.846 { 00:19:51.846 "name": "BaseBdev2", 00:19:51.846 "uuid": "0415c438-af98-5c75-8095-ffa857ed6a06", 00:19:51.846 "is_configured": true, 00:19:51.846 "data_offset": 2048, 00:19:51.846 "data_size": 63488 00:19:51.846 }, 00:19:51.846 { 00:19:51.846 "name": "BaseBdev3", 00:19:51.846 "uuid": "90e0d0b2-8d6f-5c28-a1ea-d1b8c3ffd86d", 00:19:51.846 "is_configured": true, 00:19:51.846 "data_offset": 2048, 00:19:51.846 "data_size": 63488 00:19:51.846 }, 00:19:51.846 { 00:19:51.846 "name": "BaseBdev4", 00:19:51.846 "uuid": "fef0820f-4c4a-52fa-b7aa-810b65e30c35", 00:19:51.846 "is_configured": true, 00:19:51.846 "data_offset": 2048, 00:19:51.846 "data_size": 63488 00:19:51.846 } 00:19:51.846 ] 00:19:51.846 }' 00:19:51.846 15:46:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:51.846 15:46:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.104 15:46:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:52.104 15:46:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:52.104 
15:46:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:52.104 15:46:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:52.104 15:46:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:52.104 15:46:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.104 15:46:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.104 15:46:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.104 15:46:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.362 15:46:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.362 15:46:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:52.362 "name": "raid_bdev1", 00:19:52.362 "uuid": "8aab5c34-7f48-4b04-b60b-93e7e38bad85", 00:19:52.362 "strip_size_kb": 64, 00:19:52.362 "state": "online", 00:19:52.362 "raid_level": "raid5f", 00:19:52.362 "superblock": true, 00:19:52.362 "num_base_bdevs": 4, 00:19:52.362 "num_base_bdevs_discovered": 3, 00:19:52.362 "num_base_bdevs_operational": 3, 00:19:52.362 "base_bdevs_list": [ 00:19:52.362 { 00:19:52.362 "name": null, 00:19:52.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.362 "is_configured": false, 00:19:52.362 "data_offset": 0, 00:19:52.362 "data_size": 63488 00:19:52.362 }, 00:19:52.362 { 00:19:52.362 "name": "BaseBdev2", 00:19:52.362 "uuid": "0415c438-af98-5c75-8095-ffa857ed6a06", 00:19:52.362 "is_configured": true, 00:19:52.362 "data_offset": 2048, 00:19:52.362 "data_size": 63488 00:19:52.362 }, 00:19:52.362 { 00:19:52.362 "name": "BaseBdev3", 00:19:52.362 "uuid": "90e0d0b2-8d6f-5c28-a1ea-d1b8c3ffd86d", 00:19:52.362 "is_configured": true, 00:19:52.362 "data_offset": 2048, 00:19:52.362 
"data_size": 63488 00:19:52.362 }, 00:19:52.362 { 00:19:52.362 "name": "BaseBdev4", 00:19:52.362 "uuid": "fef0820f-4c4a-52fa-b7aa-810b65e30c35", 00:19:52.362 "is_configured": true, 00:19:52.362 "data_offset": 2048, 00:19:52.362 "data_size": 63488 00:19:52.362 } 00:19:52.362 ] 00:19:52.362 }' 00:19:52.362 15:46:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:52.362 15:46:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:52.362 15:46:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:52.362 15:46:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:52.362 15:46:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:52.362 15:46:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.362 15:46:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.362 [2024-12-06 15:46:35.528712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:52.362 [2024-12-06 15:46:35.544701] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:19:52.362 15:46:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.362 15:46:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:52.362 [2024-12-06 15:46:35.554621] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:53.298 15:46:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:53.298 15:46:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:53.298 15:46:36 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:53.298 15:46:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:53.298 15:46:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:53.298 15:46:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.298 15:46:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.298 15:46:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.298 15:46:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.298 15:46:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.557 15:46:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:53.557 "name": "raid_bdev1", 00:19:53.557 "uuid": "8aab5c34-7f48-4b04-b60b-93e7e38bad85", 00:19:53.557 "strip_size_kb": 64, 00:19:53.557 "state": "online", 00:19:53.557 "raid_level": "raid5f", 00:19:53.557 "superblock": true, 00:19:53.557 "num_base_bdevs": 4, 00:19:53.557 "num_base_bdevs_discovered": 4, 00:19:53.557 "num_base_bdevs_operational": 4, 00:19:53.557 "process": { 00:19:53.557 "type": "rebuild", 00:19:53.557 "target": "spare", 00:19:53.557 "progress": { 00:19:53.557 "blocks": 19200, 00:19:53.557 "percent": 10 00:19:53.557 } 00:19:53.557 }, 00:19:53.557 "base_bdevs_list": [ 00:19:53.557 { 00:19:53.557 "name": "spare", 00:19:53.557 "uuid": "60f87804-545e-512c-9b00-27abf362ab7e", 00:19:53.557 "is_configured": true, 00:19:53.557 "data_offset": 2048, 00:19:53.557 "data_size": 63488 00:19:53.557 }, 00:19:53.557 { 00:19:53.557 "name": "BaseBdev2", 00:19:53.557 "uuid": "0415c438-af98-5c75-8095-ffa857ed6a06", 00:19:53.557 "is_configured": true, 00:19:53.557 "data_offset": 2048, 00:19:53.557 "data_size": 63488 00:19:53.557 }, 00:19:53.557 { 
00:19:53.557 "name": "BaseBdev3", 00:19:53.557 "uuid": "90e0d0b2-8d6f-5c28-a1ea-d1b8c3ffd86d", 00:19:53.557 "is_configured": true, 00:19:53.557 "data_offset": 2048, 00:19:53.557 "data_size": 63488 00:19:53.557 }, 00:19:53.557 { 00:19:53.557 "name": "BaseBdev4", 00:19:53.557 "uuid": "fef0820f-4c4a-52fa-b7aa-810b65e30c35", 00:19:53.557 "is_configured": true, 00:19:53.557 "data_offset": 2048, 00:19:53.557 "data_size": 63488 00:19:53.557 } 00:19:53.557 ] 00:19:53.557 }' 00:19:53.557 15:46:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:53.557 15:46:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:53.557 15:46:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:53.557 15:46:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:53.557 15:46:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:53.557 15:46:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:53.557 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:53.557 15:46:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:19:53.557 15:46:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:19:53.557 15:46:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=644 00:19:53.557 15:46:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:53.557 15:46:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:53.557 15:46:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:53.557 15:46:36 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:53.557 15:46:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:53.557 15:46:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:53.557 15:46:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.557 15:46:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.557 15:46:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.557 15:46:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.557 15:46:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.557 15:46:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:53.557 "name": "raid_bdev1", 00:19:53.557 "uuid": "8aab5c34-7f48-4b04-b60b-93e7e38bad85", 00:19:53.557 "strip_size_kb": 64, 00:19:53.557 "state": "online", 00:19:53.557 "raid_level": "raid5f", 00:19:53.557 "superblock": true, 00:19:53.557 "num_base_bdevs": 4, 00:19:53.557 "num_base_bdevs_discovered": 4, 00:19:53.557 "num_base_bdevs_operational": 4, 00:19:53.557 "process": { 00:19:53.557 "type": "rebuild", 00:19:53.557 "target": "spare", 00:19:53.557 "progress": { 00:19:53.557 "blocks": 21120, 00:19:53.557 "percent": 11 00:19:53.557 } 00:19:53.557 }, 00:19:53.557 "base_bdevs_list": [ 00:19:53.557 { 00:19:53.557 "name": "spare", 00:19:53.557 "uuid": "60f87804-545e-512c-9b00-27abf362ab7e", 00:19:53.557 "is_configured": true, 00:19:53.557 "data_offset": 2048, 00:19:53.557 "data_size": 63488 00:19:53.557 }, 00:19:53.557 { 00:19:53.557 "name": "BaseBdev2", 00:19:53.557 "uuid": "0415c438-af98-5c75-8095-ffa857ed6a06", 00:19:53.557 "is_configured": true, 00:19:53.557 "data_offset": 2048, 00:19:53.557 "data_size": 63488 00:19:53.557 }, 00:19:53.557 { 
00:19:53.557 "name": "BaseBdev3", 00:19:53.557 "uuid": "90e0d0b2-8d6f-5c28-a1ea-d1b8c3ffd86d", 00:19:53.557 "is_configured": true, 00:19:53.557 "data_offset": 2048, 00:19:53.557 "data_size": 63488 00:19:53.557 }, 00:19:53.557 { 00:19:53.557 "name": "BaseBdev4", 00:19:53.557 "uuid": "fef0820f-4c4a-52fa-b7aa-810b65e30c35", 00:19:53.557 "is_configured": true, 00:19:53.557 "data_offset": 2048, 00:19:53.557 "data_size": 63488 00:19:53.557 } 00:19:53.557 ] 00:19:53.557 }' 00:19:53.557 15:46:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:53.557 15:46:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:53.557 15:46:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:53.557 15:46:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:53.557 15:46:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:54.934 15:46:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:54.934 15:46:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:54.934 15:46:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:54.934 15:46:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:54.934 15:46:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:54.934 15:46:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:54.934 15:46:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.934 15:46:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.934 15:46:37 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.934 15:46:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.934 15:46:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.934 15:46:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:54.934 "name": "raid_bdev1", 00:19:54.934 "uuid": "8aab5c34-7f48-4b04-b60b-93e7e38bad85", 00:19:54.934 "strip_size_kb": 64, 00:19:54.934 "state": "online", 00:19:54.934 "raid_level": "raid5f", 00:19:54.934 "superblock": true, 00:19:54.934 "num_base_bdevs": 4, 00:19:54.934 "num_base_bdevs_discovered": 4, 00:19:54.934 "num_base_bdevs_operational": 4, 00:19:54.934 "process": { 00:19:54.934 "type": "rebuild", 00:19:54.934 "target": "spare", 00:19:54.934 "progress": { 00:19:54.934 "blocks": 42240, 00:19:54.934 "percent": 22 00:19:54.934 } 00:19:54.934 }, 00:19:54.934 "base_bdevs_list": [ 00:19:54.934 { 00:19:54.934 "name": "spare", 00:19:54.934 "uuid": "60f87804-545e-512c-9b00-27abf362ab7e", 00:19:54.934 "is_configured": true, 00:19:54.934 "data_offset": 2048, 00:19:54.934 "data_size": 63488 00:19:54.934 }, 00:19:54.934 { 00:19:54.934 "name": "BaseBdev2", 00:19:54.934 "uuid": "0415c438-af98-5c75-8095-ffa857ed6a06", 00:19:54.934 "is_configured": true, 00:19:54.934 "data_offset": 2048, 00:19:54.934 "data_size": 63488 00:19:54.934 }, 00:19:54.934 { 00:19:54.934 "name": "BaseBdev3", 00:19:54.934 "uuid": "90e0d0b2-8d6f-5c28-a1ea-d1b8c3ffd86d", 00:19:54.934 "is_configured": true, 00:19:54.934 "data_offset": 2048, 00:19:54.934 "data_size": 63488 00:19:54.934 }, 00:19:54.934 { 00:19:54.934 "name": "BaseBdev4", 00:19:54.934 "uuid": "fef0820f-4c4a-52fa-b7aa-810b65e30c35", 00:19:54.934 "is_configured": true, 00:19:54.934 "data_offset": 2048, 00:19:54.934 "data_size": 63488 00:19:54.934 } 00:19:54.934 ] 00:19:54.934 }' 00:19:54.934 15:46:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:19:54.934 15:46:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:54.934 15:46:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:54.934 15:46:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:54.934 15:46:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:55.871 15:46:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:55.871 15:46:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:55.871 15:46:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:55.871 15:46:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:55.871 15:46:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:55.871 15:46:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:55.871 15:46:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.871 15:46:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.871 15:46:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.871 15:46:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.871 15:46:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.871 15:46:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:55.871 "name": "raid_bdev1", 00:19:55.871 "uuid": "8aab5c34-7f48-4b04-b60b-93e7e38bad85", 00:19:55.871 "strip_size_kb": 64, 00:19:55.871 "state": "online", 00:19:55.871 
"raid_level": "raid5f", 00:19:55.871 "superblock": true, 00:19:55.871 "num_base_bdevs": 4, 00:19:55.871 "num_base_bdevs_discovered": 4, 00:19:55.871 "num_base_bdevs_operational": 4, 00:19:55.871 "process": { 00:19:55.871 "type": "rebuild", 00:19:55.871 "target": "spare", 00:19:55.871 "progress": { 00:19:55.871 "blocks": 65280, 00:19:55.871 "percent": 34 00:19:55.871 } 00:19:55.871 }, 00:19:55.871 "base_bdevs_list": [ 00:19:55.871 { 00:19:55.871 "name": "spare", 00:19:55.871 "uuid": "60f87804-545e-512c-9b00-27abf362ab7e", 00:19:55.871 "is_configured": true, 00:19:55.871 "data_offset": 2048, 00:19:55.871 "data_size": 63488 00:19:55.871 }, 00:19:55.871 { 00:19:55.871 "name": "BaseBdev2", 00:19:55.871 "uuid": "0415c438-af98-5c75-8095-ffa857ed6a06", 00:19:55.871 "is_configured": true, 00:19:55.871 "data_offset": 2048, 00:19:55.871 "data_size": 63488 00:19:55.871 }, 00:19:55.871 { 00:19:55.871 "name": "BaseBdev3", 00:19:55.871 "uuid": "90e0d0b2-8d6f-5c28-a1ea-d1b8c3ffd86d", 00:19:55.871 "is_configured": true, 00:19:55.871 "data_offset": 2048, 00:19:55.871 "data_size": 63488 00:19:55.871 }, 00:19:55.871 { 00:19:55.871 "name": "BaseBdev4", 00:19:55.871 "uuid": "fef0820f-4c4a-52fa-b7aa-810b65e30c35", 00:19:55.871 "is_configured": true, 00:19:55.871 "data_offset": 2048, 00:19:55.871 "data_size": 63488 00:19:55.871 } 00:19:55.871 ] 00:19:55.871 }' 00:19:55.871 15:46:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:55.871 15:46:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:55.871 15:46:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:55.871 15:46:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:55.871 15:46:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:57.248 15:46:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- 
# (( SECONDS < timeout )) 00:19:57.248 15:46:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:57.248 15:46:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:57.248 15:46:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:57.248 15:46:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:57.248 15:46:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:57.248 15:46:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.248 15:46:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.248 15:46:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.248 15:46:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.248 15:46:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.248 15:46:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:57.248 "name": "raid_bdev1", 00:19:57.248 "uuid": "8aab5c34-7f48-4b04-b60b-93e7e38bad85", 00:19:57.248 "strip_size_kb": 64, 00:19:57.248 "state": "online", 00:19:57.248 "raid_level": "raid5f", 00:19:57.248 "superblock": true, 00:19:57.248 "num_base_bdevs": 4, 00:19:57.248 "num_base_bdevs_discovered": 4, 00:19:57.248 "num_base_bdevs_operational": 4, 00:19:57.248 "process": { 00:19:57.248 "type": "rebuild", 00:19:57.248 "target": "spare", 00:19:57.248 "progress": { 00:19:57.248 "blocks": 86400, 00:19:57.248 "percent": 45 00:19:57.248 } 00:19:57.248 }, 00:19:57.248 "base_bdevs_list": [ 00:19:57.248 { 00:19:57.248 "name": "spare", 00:19:57.248 "uuid": "60f87804-545e-512c-9b00-27abf362ab7e", 00:19:57.248 "is_configured": true, 
00:19:57.248 "data_offset": 2048, 00:19:57.248 "data_size": 63488 00:19:57.248 }, 00:19:57.248 { 00:19:57.248 "name": "BaseBdev2", 00:19:57.248 "uuid": "0415c438-af98-5c75-8095-ffa857ed6a06", 00:19:57.248 "is_configured": true, 00:19:57.248 "data_offset": 2048, 00:19:57.248 "data_size": 63488 00:19:57.248 }, 00:19:57.248 { 00:19:57.248 "name": "BaseBdev3", 00:19:57.248 "uuid": "90e0d0b2-8d6f-5c28-a1ea-d1b8c3ffd86d", 00:19:57.248 "is_configured": true, 00:19:57.248 "data_offset": 2048, 00:19:57.248 "data_size": 63488 00:19:57.248 }, 00:19:57.248 { 00:19:57.248 "name": "BaseBdev4", 00:19:57.248 "uuid": "fef0820f-4c4a-52fa-b7aa-810b65e30c35", 00:19:57.248 "is_configured": true, 00:19:57.248 "data_offset": 2048, 00:19:57.248 "data_size": 63488 00:19:57.248 } 00:19:57.248 ] 00:19:57.248 }' 00:19:57.248 15:46:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:57.248 15:46:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:57.248 15:46:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:57.248 15:46:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:57.248 15:46:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:58.184 15:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:58.184 15:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:58.184 15:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:58.184 15:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:58.184 15:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:58.184 15:46:41 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:58.184 15:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.184 15:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.184 15:46:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.184 15:46:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.184 15:46:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.184 15:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:58.184 "name": "raid_bdev1", 00:19:58.184 "uuid": "8aab5c34-7f48-4b04-b60b-93e7e38bad85", 00:19:58.184 "strip_size_kb": 64, 00:19:58.184 "state": "online", 00:19:58.184 "raid_level": "raid5f", 00:19:58.184 "superblock": true, 00:19:58.184 "num_base_bdevs": 4, 00:19:58.184 "num_base_bdevs_discovered": 4, 00:19:58.184 "num_base_bdevs_operational": 4, 00:19:58.184 "process": { 00:19:58.184 "type": "rebuild", 00:19:58.184 "target": "spare", 00:19:58.184 "progress": { 00:19:58.184 "blocks": 107520, 00:19:58.184 "percent": 56 00:19:58.184 } 00:19:58.184 }, 00:19:58.184 "base_bdevs_list": [ 00:19:58.184 { 00:19:58.184 "name": "spare", 00:19:58.184 "uuid": "60f87804-545e-512c-9b00-27abf362ab7e", 00:19:58.184 "is_configured": true, 00:19:58.184 "data_offset": 2048, 00:19:58.184 "data_size": 63488 00:19:58.184 }, 00:19:58.184 { 00:19:58.184 "name": "BaseBdev2", 00:19:58.184 "uuid": "0415c438-af98-5c75-8095-ffa857ed6a06", 00:19:58.184 "is_configured": true, 00:19:58.184 "data_offset": 2048, 00:19:58.184 "data_size": 63488 00:19:58.184 }, 00:19:58.184 { 00:19:58.184 "name": "BaseBdev3", 00:19:58.184 "uuid": "90e0d0b2-8d6f-5c28-a1ea-d1b8c3ffd86d", 00:19:58.184 "is_configured": true, 00:19:58.184 "data_offset": 2048, 00:19:58.184 "data_size": 63488 00:19:58.184 }, 00:19:58.184 
{ 00:19:58.184 "name": "BaseBdev4", 00:19:58.184 "uuid": "fef0820f-4c4a-52fa-b7aa-810b65e30c35", 00:19:58.184 "is_configured": true, 00:19:58.184 "data_offset": 2048, 00:19:58.184 "data_size": 63488 00:19:58.184 } 00:19:58.184 ] 00:19:58.184 }' 00:19:58.184 15:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:58.184 15:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:58.184 15:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:58.184 15:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:58.184 15:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:59.119 15:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:59.119 15:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:59.119 15:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:59.119 15:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:59.119 15:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:59.119 15:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:59.119 15:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.119 15:46:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.119 15:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.119 15:46:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:59.378 15:46:42 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.378 15:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:59.378 "name": "raid_bdev1", 00:19:59.378 "uuid": "8aab5c34-7f48-4b04-b60b-93e7e38bad85", 00:19:59.378 "strip_size_kb": 64, 00:19:59.378 "state": "online", 00:19:59.378 "raid_level": "raid5f", 00:19:59.378 "superblock": true, 00:19:59.378 "num_base_bdevs": 4, 00:19:59.378 "num_base_bdevs_discovered": 4, 00:19:59.378 "num_base_bdevs_operational": 4, 00:19:59.378 "process": { 00:19:59.378 "type": "rebuild", 00:19:59.378 "target": "spare", 00:19:59.378 "progress": { 00:19:59.378 "blocks": 128640, 00:19:59.378 "percent": 67 00:19:59.378 } 00:19:59.378 }, 00:19:59.378 "base_bdevs_list": [ 00:19:59.378 { 00:19:59.378 "name": "spare", 00:19:59.378 "uuid": "60f87804-545e-512c-9b00-27abf362ab7e", 00:19:59.378 "is_configured": true, 00:19:59.378 "data_offset": 2048, 00:19:59.378 "data_size": 63488 00:19:59.378 }, 00:19:59.378 { 00:19:59.378 "name": "BaseBdev2", 00:19:59.378 "uuid": "0415c438-af98-5c75-8095-ffa857ed6a06", 00:19:59.378 "is_configured": true, 00:19:59.378 "data_offset": 2048, 00:19:59.378 "data_size": 63488 00:19:59.378 }, 00:19:59.378 { 00:19:59.378 "name": "BaseBdev3", 00:19:59.378 "uuid": "90e0d0b2-8d6f-5c28-a1ea-d1b8c3ffd86d", 00:19:59.378 "is_configured": true, 00:19:59.378 "data_offset": 2048, 00:19:59.378 "data_size": 63488 00:19:59.378 }, 00:19:59.378 { 00:19:59.378 "name": "BaseBdev4", 00:19:59.378 "uuid": "fef0820f-4c4a-52fa-b7aa-810b65e30c35", 00:19:59.378 "is_configured": true, 00:19:59.378 "data_offset": 2048, 00:19:59.378 "data_size": 63488 00:19:59.378 } 00:19:59.378 ] 00:19:59.378 }' 00:19:59.378 15:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:59.378 15:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:59.378 15:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:19:59.378 15:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:59.378 15:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:00.358 15:46:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:00.358 15:46:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:00.358 15:46:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:00.358 15:46:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:00.358 15:46:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:00.359 15:46:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:00.359 15:46:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.359 15:46:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.359 15:46:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.359 15:46:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:00.359 15:46:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.359 15:46:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:00.359 "name": "raid_bdev1", 00:20:00.359 "uuid": "8aab5c34-7f48-4b04-b60b-93e7e38bad85", 00:20:00.359 "strip_size_kb": 64, 00:20:00.359 "state": "online", 00:20:00.359 "raid_level": "raid5f", 00:20:00.359 "superblock": true, 00:20:00.359 "num_base_bdevs": 4, 00:20:00.359 "num_base_bdevs_discovered": 4, 00:20:00.359 "num_base_bdevs_operational": 4, 00:20:00.359 "process": { 00:20:00.359 "type": 
"rebuild", 00:20:00.359 "target": "spare", 00:20:00.359 "progress": { 00:20:00.359 "blocks": 151680, 00:20:00.359 "percent": 79 00:20:00.359 } 00:20:00.359 }, 00:20:00.359 "base_bdevs_list": [ 00:20:00.359 { 00:20:00.359 "name": "spare", 00:20:00.359 "uuid": "60f87804-545e-512c-9b00-27abf362ab7e", 00:20:00.359 "is_configured": true, 00:20:00.359 "data_offset": 2048, 00:20:00.359 "data_size": 63488 00:20:00.359 }, 00:20:00.359 { 00:20:00.359 "name": "BaseBdev2", 00:20:00.359 "uuid": "0415c438-af98-5c75-8095-ffa857ed6a06", 00:20:00.359 "is_configured": true, 00:20:00.359 "data_offset": 2048, 00:20:00.359 "data_size": 63488 00:20:00.359 }, 00:20:00.359 { 00:20:00.359 "name": "BaseBdev3", 00:20:00.359 "uuid": "90e0d0b2-8d6f-5c28-a1ea-d1b8c3ffd86d", 00:20:00.359 "is_configured": true, 00:20:00.359 "data_offset": 2048, 00:20:00.359 "data_size": 63488 00:20:00.359 }, 00:20:00.359 { 00:20:00.359 "name": "BaseBdev4", 00:20:00.359 "uuid": "fef0820f-4c4a-52fa-b7aa-810b65e30c35", 00:20:00.359 "is_configured": true, 00:20:00.359 "data_offset": 2048, 00:20:00.359 "data_size": 63488 00:20:00.359 } 00:20:00.359 ] 00:20:00.359 }' 00:20:00.359 15:46:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:00.359 15:46:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:00.359 15:46:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:00.359 15:46:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:00.359 15:46:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:01.733 15:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:01.733 15:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:01.733 15:46:44 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:01.733 15:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:01.733 15:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:01.733 15:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:01.733 15:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.733 15:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.733 15:46:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.733 15:46:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:01.733 15:46:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.733 15:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:01.733 "name": "raid_bdev1", 00:20:01.733 "uuid": "8aab5c34-7f48-4b04-b60b-93e7e38bad85", 00:20:01.733 "strip_size_kb": 64, 00:20:01.733 "state": "online", 00:20:01.733 "raid_level": "raid5f", 00:20:01.733 "superblock": true, 00:20:01.733 "num_base_bdevs": 4, 00:20:01.733 "num_base_bdevs_discovered": 4, 00:20:01.733 "num_base_bdevs_operational": 4, 00:20:01.733 "process": { 00:20:01.733 "type": "rebuild", 00:20:01.733 "target": "spare", 00:20:01.733 "progress": { 00:20:01.733 "blocks": 172800, 00:20:01.733 "percent": 90 00:20:01.733 } 00:20:01.733 }, 00:20:01.733 "base_bdevs_list": [ 00:20:01.733 { 00:20:01.733 "name": "spare", 00:20:01.733 "uuid": "60f87804-545e-512c-9b00-27abf362ab7e", 00:20:01.733 "is_configured": true, 00:20:01.733 "data_offset": 2048, 00:20:01.733 "data_size": 63488 00:20:01.733 }, 00:20:01.733 { 00:20:01.733 "name": "BaseBdev2", 00:20:01.733 "uuid": "0415c438-af98-5c75-8095-ffa857ed6a06", 00:20:01.733 
"is_configured": true, 00:20:01.733 "data_offset": 2048, 00:20:01.733 "data_size": 63488 00:20:01.733 }, 00:20:01.733 { 00:20:01.733 "name": "BaseBdev3", 00:20:01.733 "uuid": "90e0d0b2-8d6f-5c28-a1ea-d1b8c3ffd86d", 00:20:01.733 "is_configured": true, 00:20:01.733 "data_offset": 2048, 00:20:01.733 "data_size": 63488 00:20:01.733 }, 00:20:01.733 { 00:20:01.733 "name": "BaseBdev4", 00:20:01.733 "uuid": "fef0820f-4c4a-52fa-b7aa-810b65e30c35", 00:20:01.733 "is_configured": true, 00:20:01.733 "data_offset": 2048, 00:20:01.733 "data_size": 63488 00:20:01.733 } 00:20:01.733 ] 00:20:01.733 }' 00:20:01.733 15:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:01.733 15:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:01.733 15:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:01.733 15:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:01.733 15:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:02.667 [2024-12-06 15:46:45.622187] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:02.667 [2024-12-06 15:46:45.622320] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:02.667 [2024-12-06 15:46:45.622500] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:02.667 15:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:02.667 15:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:02.667 15:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:02.667 15:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:20:02.667 15:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:02.667 15:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:02.667 15:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.667 15:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.667 15:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.667 15:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.667 15:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.667 15:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:02.667 "name": "raid_bdev1", 00:20:02.667 "uuid": "8aab5c34-7f48-4b04-b60b-93e7e38bad85", 00:20:02.667 "strip_size_kb": 64, 00:20:02.667 "state": "online", 00:20:02.667 "raid_level": "raid5f", 00:20:02.667 "superblock": true, 00:20:02.667 "num_base_bdevs": 4, 00:20:02.667 "num_base_bdevs_discovered": 4, 00:20:02.667 "num_base_bdevs_operational": 4, 00:20:02.667 "base_bdevs_list": [ 00:20:02.668 { 00:20:02.668 "name": "spare", 00:20:02.668 "uuid": "60f87804-545e-512c-9b00-27abf362ab7e", 00:20:02.668 "is_configured": true, 00:20:02.668 "data_offset": 2048, 00:20:02.668 "data_size": 63488 00:20:02.668 }, 00:20:02.668 { 00:20:02.668 "name": "BaseBdev2", 00:20:02.668 "uuid": "0415c438-af98-5c75-8095-ffa857ed6a06", 00:20:02.668 "is_configured": true, 00:20:02.668 "data_offset": 2048, 00:20:02.668 "data_size": 63488 00:20:02.668 }, 00:20:02.668 { 00:20:02.668 "name": "BaseBdev3", 00:20:02.668 "uuid": "90e0d0b2-8d6f-5c28-a1ea-d1b8c3ffd86d", 00:20:02.668 "is_configured": true, 00:20:02.668 "data_offset": 2048, 00:20:02.668 "data_size": 63488 00:20:02.668 }, 00:20:02.668 { 00:20:02.668 "name": 
"BaseBdev4", 00:20:02.668 "uuid": "fef0820f-4c4a-52fa-b7aa-810b65e30c35", 00:20:02.668 "is_configured": true, 00:20:02.668 "data_offset": 2048, 00:20:02.668 "data_size": 63488 00:20:02.668 } 00:20:02.668 ] 00:20:02.668 }' 00:20:02.668 15:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:02.668 15:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:02.668 15:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:02.668 15:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:02.668 15:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:20:02.668 15:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:02.668 15:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:02.668 15:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:02.668 15:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:02.668 15:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:02.668 15:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.668 15:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.668 15:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.668 15:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.668 15:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.927 15:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:20:02.927 "name": "raid_bdev1", 00:20:02.927 "uuid": "8aab5c34-7f48-4b04-b60b-93e7e38bad85", 00:20:02.927 "strip_size_kb": 64, 00:20:02.927 "state": "online", 00:20:02.927 "raid_level": "raid5f", 00:20:02.927 "superblock": true, 00:20:02.927 "num_base_bdevs": 4, 00:20:02.927 "num_base_bdevs_discovered": 4, 00:20:02.927 "num_base_bdevs_operational": 4, 00:20:02.927 "base_bdevs_list": [ 00:20:02.927 { 00:20:02.927 "name": "spare", 00:20:02.927 "uuid": "60f87804-545e-512c-9b00-27abf362ab7e", 00:20:02.927 "is_configured": true, 00:20:02.927 "data_offset": 2048, 00:20:02.927 "data_size": 63488 00:20:02.927 }, 00:20:02.927 { 00:20:02.927 "name": "BaseBdev2", 00:20:02.927 "uuid": "0415c438-af98-5c75-8095-ffa857ed6a06", 00:20:02.927 "is_configured": true, 00:20:02.927 "data_offset": 2048, 00:20:02.927 "data_size": 63488 00:20:02.927 }, 00:20:02.927 { 00:20:02.927 "name": "BaseBdev3", 00:20:02.927 "uuid": "90e0d0b2-8d6f-5c28-a1ea-d1b8c3ffd86d", 00:20:02.927 "is_configured": true, 00:20:02.927 "data_offset": 2048, 00:20:02.927 "data_size": 63488 00:20:02.927 }, 00:20:02.927 { 00:20:02.927 "name": "BaseBdev4", 00:20:02.927 "uuid": "fef0820f-4c4a-52fa-b7aa-810b65e30c35", 00:20:02.927 "is_configured": true, 00:20:02.927 "data_offset": 2048, 00:20:02.927 "data_size": 63488 00:20:02.927 } 00:20:02.927 ] 00:20:02.927 }' 00:20:02.927 15:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:02.927 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:02.927 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:02.927 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:02.927 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:02.927 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:02.927 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:02.927 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:02.927 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:02.927 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:02.927 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:02.927 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:02.927 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:02.927 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:02.927 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.927 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.927 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.927 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.927 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.927 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:02.927 "name": "raid_bdev1", 00:20:02.927 "uuid": "8aab5c34-7f48-4b04-b60b-93e7e38bad85", 00:20:02.927 "strip_size_kb": 64, 00:20:02.927 "state": "online", 00:20:02.927 "raid_level": "raid5f", 00:20:02.927 "superblock": true, 00:20:02.927 "num_base_bdevs": 4, 00:20:02.927 "num_base_bdevs_discovered": 4, 00:20:02.927 "num_base_bdevs_operational": 4, 00:20:02.927 "base_bdevs_list": [ 00:20:02.927 { 
00:20:02.927 "name": "spare", 00:20:02.927 "uuid": "60f87804-545e-512c-9b00-27abf362ab7e", 00:20:02.927 "is_configured": true, 00:20:02.927 "data_offset": 2048, 00:20:02.927 "data_size": 63488 00:20:02.927 }, 00:20:02.927 { 00:20:02.927 "name": "BaseBdev2", 00:20:02.927 "uuid": "0415c438-af98-5c75-8095-ffa857ed6a06", 00:20:02.927 "is_configured": true, 00:20:02.927 "data_offset": 2048, 00:20:02.927 "data_size": 63488 00:20:02.927 }, 00:20:02.927 { 00:20:02.927 "name": "BaseBdev3", 00:20:02.927 "uuid": "90e0d0b2-8d6f-5c28-a1ea-d1b8c3ffd86d", 00:20:02.927 "is_configured": true, 00:20:02.927 "data_offset": 2048, 00:20:02.927 "data_size": 63488 00:20:02.927 }, 00:20:02.927 { 00:20:02.927 "name": "BaseBdev4", 00:20:02.927 "uuid": "fef0820f-4c4a-52fa-b7aa-810b65e30c35", 00:20:02.927 "is_configured": true, 00:20:02.927 "data_offset": 2048, 00:20:02.927 "data_size": 63488 00:20:02.927 } 00:20:02.927 ] 00:20:02.927 }' 00:20:02.927 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:02.927 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:03.496 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:03.496 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.496 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:03.496 [2024-12-06 15:46:46.500800] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:03.496 [2024-12-06 15:46:46.500846] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:03.496 [2024-12-06 15:46:46.500956] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:03.496 [2024-12-06 15:46:46.501076] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:03.496 [2024-12-06 
15:46:46.501110] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:03.496 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.496 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.496 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.496 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:20:03.496 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:03.496 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.496 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:03.496 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:03.497 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:03.497 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:03.497 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:03.497 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:03.497 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:03.497 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:03.497 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:03.497 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:03.497 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:03.497 15:46:46 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:03.497 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:03.497 /dev/nbd0 00:20:03.757 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:03.757 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:03.757 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:03.757 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:20:03.757 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:03.757 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:03.757 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:03.757 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:20:03.757 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:03.757 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:03.757 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:03.757 1+0 records in 00:20:03.757 1+0 records out 00:20:03.757 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358585 s, 11.4 MB/s 00:20:03.757 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:03.757 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:20:03.757 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:03.757 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:03.757 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:20:03.757 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:03.757 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:03.757 15:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:03.757 /dev/nbd1 00:20:04.017 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:04.017 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:04.017 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:04.017 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:20:04.017 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:04.017 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:04.017 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:04.017 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:20:04.017 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:04.017 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:04.017 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:04.017 1+0 records in 00:20:04.017 
1+0 records out 00:20:04.017 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000499827 s, 8.2 MB/s 00:20:04.017 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:04.017 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:20:04.017 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:04.017 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:04.017 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:20:04.017 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:04.017 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:04.017 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:04.277 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:04.277 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:04.277 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:04.277 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:04.277 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:20:04.277 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:04.277 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:04.277 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:04.277 
15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:20:04.277 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:20:04.277 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:20:04.277 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:20:04.277 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:20:04.277 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:20:04.277 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:20:04.277 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:20:04.277 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:20:04.537 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:20:04.537 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:20:04.537 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:20:04.537 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:20:04.537 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:20:04.537 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:20:04.537 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:20:04.537 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:20:04.537 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']'
00:20:04.537 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare
00:20:04.537 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:04.537 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:04.537 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:04.537 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:20:04.537 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:04.537 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:04.537 [2024-12-06 15:46:47.802106] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:20:04.537 [2024-12-06 15:46:47.802178] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:04.537 [2024-12-06 15:46:47.802210] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80
00:20:04.537 [2024-12-06 15:46:47.802224] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:04.537 [2024-12-06 15:46:47.805219] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:04.537 [2024-12-06 15:46:47.805272] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:20:04.537 [2024-12-06 15:46:47.805389] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:20:04.537 [2024-12-06 15:46:47.805459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:20:04.537 [2024-12-06 15:46:47.805645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:20:04.537 [2024-12-06 15:46:47.805759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:20:04.537 [2024-12-06 15:46:47.805860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:20:04.537 spare
00:20:04.537 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:04.537 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine
00:20:04.537 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:04.537 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:04.798 [2024-12-06 15:46:47.905811] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:20:04.798 [2024-12-06 15:46:47.905854] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:20:04.798 [2024-12-06 15:46:47.906188] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0
00:20:04.798 [2024-12-06 15:46:47.913968] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:20:04.798 [2024-12-06 15:46:47.914008] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00
00:20:04.798 [2024-12-06 15:46:47.914233] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:04.798 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:04.798 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:20:04.798 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:04.798 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:20:04.798 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:20:04.798 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:20:04.798 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:20:04.798 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:04.798 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:04.798 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:04.798 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:04.798 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:04.798 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:04.798 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:04.798 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:04.798 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:04.798 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:04.798 "name": "raid_bdev1",
00:20:04.798 "uuid": "8aab5c34-7f48-4b04-b60b-93e7e38bad85",
00:20:04.798 "strip_size_kb": 64,
00:20:04.798 "state": "online",
00:20:04.798 "raid_level": "raid5f",
00:20:04.798 "superblock": true,
00:20:04.798 "num_base_bdevs": 4,
00:20:04.798 "num_base_bdevs_discovered": 4,
00:20:04.798 "num_base_bdevs_operational": 4,
00:20:04.798 "base_bdevs_list": [
00:20:04.798 {
00:20:04.798 "name": "spare",
00:20:04.798 "uuid": "60f87804-545e-512c-9b00-27abf362ab7e",
00:20:04.798 "is_configured": true,
00:20:04.798 "data_offset": 2048,
00:20:04.798 "data_size": 63488
00:20:04.798 },
00:20:04.798 {
00:20:04.798 "name": "BaseBdev2",
00:20:04.798 "uuid": "0415c438-af98-5c75-8095-ffa857ed6a06",
00:20:04.798 "is_configured": true,
00:20:04.798 "data_offset": 2048,
00:20:04.798 "data_size": 63488
00:20:04.798 },
00:20:04.798 {
00:20:04.798 "name": "BaseBdev3",
00:20:04.798 "uuid": "90e0d0b2-8d6f-5c28-a1ea-d1b8c3ffd86d",
00:20:04.798 "is_configured": true,
00:20:04.798 "data_offset": 2048,
00:20:04.798 "data_size": 63488
00:20:04.798 },
00:20:04.798 {
00:20:04.798 "name": "BaseBdev4",
00:20:04.798 "uuid": "fef0820f-4c4a-52fa-b7aa-810b65e30c35",
00:20:04.798 "is_configured": true,
00:20:04.798 "data_offset": 2048,
00:20:04.798 "data_size": 63488
00:20:04.798 }
00:20:04.798 ]
00:20:04.798 }'
00:20:04.798 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:04.798 15:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:05.057 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:20:05.057 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:20:05.057 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:20:05.057 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:20:05.057 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:20:05.057 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:05.057 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:05.057 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:05.057 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:05.057 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:05.317 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:20:05.317 "name": "raid_bdev1",
00:20:05.317 "uuid": "8aab5c34-7f48-4b04-b60b-93e7e38bad85",
00:20:05.317 "strip_size_kb": 64,
00:20:05.317 "state": "online",
00:20:05.317 "raid_level": "raid5f",
00:20:05.317 "superblock": true,
00:20:05.317 "num_base_bdevs": 4,
00:20:05.317 "num_base_bdevs_discovered": 4,
00:20:05.317 "num_base_bdevs_operational": 4,
00:20:05.317 "base_bdevs_list": [
00:20:05.317 {
00:20:05.317 "name": "spare",
00:20:05.317 "uuid": "60f87804-545e-512c-9b00-27abf362ab7e",
00:20:05.317 "is_configured": true,
00:20:05.317 "data_offset": 2048,
00:20:05.317 "data_size": 63488
00:20:05.317 },
00:20:05.317 {
00:20:05.317 "name": "BaseBdev2",
00:20:05.317 "uuid": "0415c438-af98-5c75-8095-ffa857ed6a06",
00:20:05.317 "is_configured": true,
00:20:05.317 "data_offset": 2048,
00:20:05.317 "data_size": 63488
00:20:05.317 },
00:20:05.317 {
00:20:05.317 "name": "BaseBdev3",
00:20:05.317 "uuid": "90e0d0b2-8d6f-5c28-a1ea-d1b8c3ffd86d",
00:20:05.317 "is_configured": true,
00:20:05.317 "data_offset": 2048,
00:20:05.317 "data_size": 63488
00:20:05.317 },
00:20:05.317 {
00:20:05.317 "name": "BaseBdev4",
00:20:05.317 "uuid": "fef0820f-4c4a-52fa-b7aa-810b65e30c35",
00:20:05.317 "is_configured": true,
00:20:05.317 "data_offset": 2048,
00:20:05.317 "data_size": 63488
00:20:05.317 }
00:20:05.317 ]
00:20:05.317 }'
00:20:05.317 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:20:05.317 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:20:05.317 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:20:05.317 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:20:05.317 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:05.317 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:05.317 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:05.317 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:20:05.317 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:05.317 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:20:05.317 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:20:05.317 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:05.317 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:05.317 [2024-12-06 15:46:48.511053] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:20:05.317 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:05.317 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:20:05.317 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:05.317 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:20:05.317 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:20:05.317 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:20:05.317 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:20:05.317 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:05.317 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:05.317 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:05.317 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:05.317 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:05.317 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:05.317 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:05.317 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:05.317 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:05.317 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:05.317 "name": "raid_bdev1",
00:20:05.317 "uuid": "8aab5c34-7f48-4b04-b60b-93e7e38bad85",
00:20:05.317 "strip_size_kb": 64,
00:20:05.317 "state": "online",
00:20:05.317 "raid_level": "raid5f",
00:20:05.317 "superblock": true,
00:20:05.317 "num_base_bdevs": 4,
00:20:05.317 "num_base_bdevs_discovered": 3,
00:20:05.317 "num_base_bdevs_operational": 3,
00:20:05.317 "base_bdevs_list": [
00:20:05.317 {
00:20:05.317 "name": null,
00:20:05.317 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:05.317 "is_configured": false,
00:20:05.317 "data_offset": 0,
00:20:05.317 "data_size": 63488
00:20:05.317 },
00:20:05.317 {
00:20:05.317 "name": "BaseBdev2",
00:20:05.317 "uuid": "0415c438-af98-5c75-8095-ffa857ed6a06",
00:20:05.317 "is_configured": true,
00:20:05.317 "data_offset": 2048,
00:20:05.317 "data_size": 63488
00:20:05.317 },
00:20:05.317 {
00:20:05.317 "name": "BaseBdev3",
00:20:05.317 "uuid": "90e0d0b2-8d6f-5c28-a1ea-d1b8c3ffd86d",
00:20:05.317 "is_configured": true,
00:20:05.317 "data_offset": 2048,
00:20:05.317 "data_size": 63488
00:20:05.317 },
00:20:05.317 {
00:20:05.317 "name": "BaseBdev4",
00:20:05.317 "uuid": "fef0820f-4c4a-52fa-b7aa-810b65e30c35",
00:20:05.317 "is_configured": true,
00:20:05.317 "data_offset": 2048,
00:20:05.317 "data_size": 63488
00:20:05.317 }
00:20:05.317 ]
00:20:05.317 }'
00:20:05.317 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:05.317 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:05.885 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:20:05.885 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:05.885 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:05.885 [2024-12-06 15:46:48.930513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:20:05.885 [2024-12-06 15:46:48.930769] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:20:05.885 [2024-12-06 15:46:48.930795] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:20:05.885 [2024-12-06 15:46:48.930845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:20:05.885 [2024-12-06 15:46:48.946456] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0
00:20:05.885 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:05.885 15:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1
00:20:05.885 [2024-12-06 15:46:48.956849] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:20:06.822 15:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:06.822 15:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:20:06.822 15:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:20:06.822 15:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:20:06.822 15:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:20:06.822 15:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:06.822 15:46:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:06.822 15:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:06.822 15:46:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:06.822 15:46:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:06.822 15:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:20:06.822 "name": "raid_bdev1",
00:20:06.822 "uuid": "8aab5c34-7f48-4b04-b60b-93e7e38bad85",
00:20:06.822 "strip_size_kb": 64,
00:20:06.822 "state": "online",
00:20:06.822 "raid_level": "raid5f",
00:20:06.822 "superblock": true,
00:20:06.822 "num_base_bdevs": 4,
00:20:06.822 "num_base_bdevs_discovered": 4,
00:20:06.822 "num_base_bdevs_operational": 4,
00:20:06.822 "process": {
00:20:06.822 "type": "rebuild",
00:20:06.822 "target": "spare",
00:20:06.822 "progress": {
00:20:06.822 "blocks": 19200,
00:20:06.822 "percent": 10
00:20:06.822 }
00:20:06.822 },
00:20:06.822 "base_bdevs_list": [
00:20:06.822 {
00:20:06.822 "name": "spare",
00:20:06.822 "uuid": "60f87804-545e-512c-9b00-27abf362ab7e",
00:20:06.822 "is_configured": true,
00:20:06.822 "data_offset": 2048,
00:20:06.822 "data_size": 63488
00:20:06.822 },
00:20:06.822 {
00:20:06.822 "name": "BaseBdev2",
00:20:06.822 "uuid": "0415c438-af98-5c75-8095-ffa857ed6a06",
00:20:06.822 "is_configured": true,
00:20:06.822 "data_offset": 2048,
00:20:06.822 "data_size": 63488
00:20:06.822 },
00:20:06.822 {
00:20:06.822 "name": "BaseBdev3",
00:20:06.822 "uuid": "90e0d0b2-8d6f-5c28-a1ea-d1b8c3ffd86d",
00:20:06.822 "is_configured": true,
00:20:06.822 "data_offset": 2048,
00:20:06.822 "data_size": 63488
00:20:06.822 },
00:20:06.822 {
00:20:06.822 "name": "BaseBdev4",
00:20:06.822 "uuid": "fef0820f-4c4a-52fa-b7aa-810b65e30c35",
00:20:06.822 "is_configured": true,
00:20:06.822 "data_offset": 2048,
00:20:06.822 "data_size": 63488
00:20:06.822 }
00:20:06.822 ]
00:20:06.822 }'
00:20:06.822 15:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:20:06.823 15:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:06.823 15:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:20:06.823 15:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:20:06.823 15:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:20:06.823 15:46:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:06.823 15:46:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:07.082 [2024-12-06 15:46:50.100185] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:20:07.082 [2024-12-06 15:46:50.168028] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:20:07.082 [2024-12-06 15:46:50.168152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:07.082 [2024-12-06 15:46:50.168174] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:20:07.082 [2024-12-06 15:46:50.168187] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:20:07.082 15:46:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:07.082 15:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:20:07.082 15:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:07.082 15:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:20:07.082 15:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:20:07.082 15:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:20:07.082 15:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:20:07.082 15:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:07.082 15:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:07.082 15:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:07.082 15:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:07.082 15:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:07.082 15:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:07.082 15:46:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:07.082 15:46:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:07.082 15:46:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:07.082 15:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:07.082 "name": "raid_bdev1",
00:20:07.082 "uuid": "8aab5c34-7f48-4b04-b60b-93e7e38bad85",
00:20:07.082 "strip_size_kb": 64,
00:20:07.082 "state": "online",
00:20:07.082 "raid_level": "raid5f",
00:20:07.082 "superblock": true,
00:20:07.082 "num_base_bdevs": 4,
00:20:07.082 "num_base_bdevs_discovered": 3,
00:20:07.082 "num_base_bdevs_operational": 3,
00:20:07.082 "base_bdevs_list": [
00:20:07.082 {
00:20:07.082 "name": null,
00:20:07.082 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:07.082 "is_configured": false,
00:20:07.082 "data_offset": 0,
00:20:07.082 "data_size": 63488
00:20:07.082 },
00:20:07.082 {
00:20:07.082 "name": "BaseBdev2",
00:20:07.082 "uuid": "0415c438-af98-5c75-8095-ffa857ed6a06",
00:20:07.082 "is_configured": true,
00:20:07.082 "data_offset": 2048,
00:20:07.082 "data_size": 63488
00:20:07.082 },
00:20:07.082 {
00:20:07.082 "name": "BaseBdev3",
00:20:07.082 "uuid": "90e0d0b2-8d6f-5c28-a1ea-d1b8c3ffd86d",
00:20:07.082 "is_configured": true,
00:20:07.082 "data_offset": 2048,
00:20:07.082 "data_size": 63488
00:20:07.082 },
00:20:07.082 {
00:20:07.082 "name": "BaseBdev4",
00:20:07.082 "uuid": "fef0820f-4c4a-52fa-b7aa-810b65e30c35",
00:20:07.082 "is_configured": true,
00:20:07.082 "data_offset": 2048,
00:20:07.082 "data_size": 63488
00:20:07.082 }
00:20:07.082 ]
00:20:07.082 }'
00:20:07.082 15:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:07.082 15:46:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:07.341 15:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:20:07.341 15:46:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:07.341 15:46:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:07.600 [2024-12-06 15:46:50.639035] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:20:07.600 [2024-12-06 15:46:50.639129] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:07.600 [2024-12-06 15:46:50.639165] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380
00:20:07.600 [2024-12-06 15:46:50.639182] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:07.600 [2024-12-06 15:46:50.639839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:07.600 [2024-12-06 15:46:50.639874] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:20:07.600 [2024-12-06 15:46:50.640005] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:20:07.600 [2024-12-06 15:46:50.640026] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:20:07.600 [2024-12-06 15:46:50.640040] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:20:07.600 [2024-12-06 15:46:50.640075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:20:07.600 [2024-12-06 15:46:50.655810] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370
spare
00:20:07.600 15:46:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:07.600 15:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1
00:20:07.600 [2024-12-06 15:46:50.665562] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:20:08.537 15:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:08.537 15:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:20:08.537 15:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:20:08.537 15:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:20:08.537 15:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:20:08.537 15:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:08.537 15:46:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.537 15:46:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:08.537 15:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:08.537 15:46:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:08.537 15:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:20:08.537 "name": "raid_bdev1",
00:20:08.537 "uuid": "8aab5c34-7f48-4b04-b60b-93e7e38bad85",
00:20:08.537 "strip_size_kb": 64,
00:20:08.537 "state": "online",
00:20:08.537 "raid_level": "raid5f",
00:20:08.537 "superblock": true,
00:20:08.537 "num_base_bdevs": 4,
00:20:08.537 "num_base_bdevs_discovered": 4,
00:20:08.537 "num_base_bdevs_operational": 4,
00:20:08.537 "process": {
00:20:08.537 "type": "rebuild",
00:20:08.537 "target": "spare",
00:20:08.537 "progress": {
00:20:08.537 "blocks": 17280,
00:20:08.537 "percent": 9
00:20:08.537 }
00:20:08.537 },
00:20:08.537 "base_bdevs_list": [
00:20:08.537 {
00:20:08.537 "name": "spare",
00:20:08.537 "uuid": "60f87804-545e-512c-9b00-27abf362ab7e",
00:20:08.537 "is_configured": true,
00:20:08.537 "data_offset": 2048,
00:20:08.537 "data_size": 63488
00:20:08.537 },
00:20:08.537 {
00:20:08.537 "name": "BaseBdev2",
00:20:08.537 "uuid": "0415c438-af98-5c75-8095-ffa857ed6a06",
00:20:08.537 "is_configured": true,
00:20:08.537 "data_offset": 2048,
00:20:08.537 "data_size": 63488
00:20:08.537 },
00:20:08.537 {
00:20:08.537 "name": "BaseBdev3",
00:20:08.537 "uuid": "90e0d0b2-8d6f-5c28-a1ea-d1b8c3ffd86d",
00:20:08.537 "is_configured": true,
00:20:08.537 "data_offset": 2048,
00:20:08.537 "data_size": 63488
00:20:08.537 },
00:20:08.537 {
00:20:08.537 "name": "BaseBdev4",
00:20:08.537 "uuid": "fef0820f-4c4a-52fa-b7aa-810b65e30c35",
00:20:08.537 "is_configured": true,
00:20:08.537 "data_offset": 2048,
00:20:08.537 "data_size": 63488
00:20:08.537 }
00:20:08.537 ]
00:20:08.537 }'
00:20:08.537 15:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:20:08.537 15:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:08.537 15:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:20:08.537 15:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:20:08.537 15:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:20:08.537 15:46:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.537 15:46:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:08.537 [2024-12-06 15:46:51.800980] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:20:08.796 [2024-12-06 15:46:51.875975] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:20:08.796 [2024-12-06 15:46:51.876191] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:08.796 [2024-12-06 15:46:51.876225] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:20:08.796 [2024-12-06 15:46:51.876237] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:20:08.796 15:46:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:08.796 15:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:20:08.796 15:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:08.796 15:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:20:08.796 15:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:20:08.796 15:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:20:08.796 15:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:20:08.796 15:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:08.796 15:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:08.796 15:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:08.796 15:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:08.796 15:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:08.796 15:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:08.796 15:46:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.796 15:46:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:08.796 15:46:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:08.796 15:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:08.796 "name": "raid_bdev1",
00:20:08.796 "uuid": "8aab5c34-7f48-4b04-b60b-93e7e38bad85",
00:20:08.796 "strip_size_kb": 64,
00:20:08.796 "state": "online",
00:20:08.796 "raid_level": "raid5f",
00:20:08.796 "superblock": true,
00:20:08.796 "num_base_bdevs": 4,
00:20:08.796 "num_base_bdevs_discovered": 3,
00:20:08.796 "num_base_bdevs_operational": 3,
00:20:08.796 "base_bdevs_list": [
00:20:08.796 {
00:20:08.796 "name": null,
00:20:08.796 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:08.796 "is_configured": false,
00:20:08.796 "data_offset": 0,
00:20:08.796 "data_size": 63488
00:20:08.796 },
00:20:08.796 {
00:20:08.796 "name": "BaseBdev2",
00:20:08.796 "uuid": "0415c438-af98-5c75-8095-ffa857ed6a06",
00:20:08.796 "is_configured": true,
00:20:08.796 "data_offset": 2048,
00:20:08.796 "data_size": 63488
00:20:08.796 },
00:20:08.796 {
00:20:08.796 "name": "BaseBdev3",
00:20:08.796 "uuid": "90e0d0b2-8d6f-5c28-a1ea-d1b8c3ffd86d",
00:20:08.796 "is_configured": true,
00:20:08.796 "data_offset": 2048,
00:20:08.796 "data_size": 63488
00:20:08.796 },
00:20:08.796 {
00:20:08.796 "name": "BaseBdev4",
00:20:08.796 "uuid": "fef0820f-4c4a-52fa-b7aa-810b65e30c35",
00:20:08.796 "is_configured": true,
00:20:08.796 "data_offset": 2048,
00:20:08.796 "data_size": 63488
00:20:08.796 }
00:20:08.796 ]
00:20:08.796 }'
00:20:08.796 15:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:08.796 15:46:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:09.056 15:46:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:20:09.056 15:46:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:20:09.056 15:46:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:20:09.056 15:46:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:20:09.056 15:46:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:20:09.056 15:46:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:09.056 15:46:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:09.056 15:46:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:09.056 15:46:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:09.056 15:46:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:09.316 15:46:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:20:09.316 "name": "raid_bdev1",
00:20:09.316 "uuid": "8aab5c34-7f48-4b04-b60b-93e7e38bad85",
00:20:09.316 "strip_size_kb": 64,
00:20:09.316 "state": "online",
00:20:09.316 "raid_level": "raid5f",
00:20:09.316 "superblock": true,
00:20:09.316 "num_base_bdevs": 4,
00:20:09.316 "num_base_bdevs_discovered": 3,
00:20:09.316 "num_base_bdevs_operational": 3,
00:20:09.316 "base_bdevs_list": [
00:20:09.316 {
00:20:09.316 "name": null,
00:20:09.316 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:09.316 "is_configured": false,
00:20:09.316 "data_offset": 0,
00:20:09.316 "data_size": 63488
00:20:09.316 },
00:20:09.316 {
00:20:09.316 "name": "BaseBdev2",
00:20:09.316 "uuid": "0415c438-af98-5c75-8095-ffa857ed6a06",
00:20:09.316 "is_configured": true,
00:20:09.316 "data_offset": 2048,
00:20:09.316 "data_size": 63488
00:20:09.316 },
00:20:09.316 {
00:20:09.316 "name": "BaseBdev3",
00:20:09.316 "uuid": "90e0d0b2-8d6f-5c28-a1ea-d1b8c3ffd86d",
00:20:09.316 "is_configured": true,
00:20:09.316 "data_offset": 2048,
00:20:09.316 "data_size": 63488
00:20:09.316 },
00:20:09.316 {
00:20:09.316 "name": "BaseBdev4",
00:20:09.316 "uuid": "fef0820f-4c4a-52fa-b7aa-810b65e30c35",
00:20:09.316 "is_configured": true,
00:20:09.316 "data_offset": 2048,
00:20:09.316 "data_size": 63488
00:20:09.316 }
00:20:09.316 ]
00:20:09.316 }'
00:20:09.316 15:46:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:20:09.316 15:46:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:20:09.316 15:46:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:20:09.316 15:46:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:20:09.316 15:46:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:20:09.316 15:46:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:09.316 15:46:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:09.316 15:46:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:09.316 15:46:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:20:09.316 15:46:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:09.316 15:46:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:09.316 [2024-12-06 15:46:52.458373] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:20:09.316 [2024-12-06 15:46:52.458442] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:09.316 [2024-12-06 15:46:52.458473] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980
00:20:09.316 [2024-12-06 15:46:52.458486] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:09.316 [2024-12-06 15:46:52.459085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:09.316 [2024-12-06 15:46:52.459113] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:20:09.316 [2024-12-06 15:46:52.459213] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
00:20:09.316 [2024-12-06 15:46:52.459230] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:20:09.316 [2024-12-06 15:46:52.459247] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:20:09.316 [2024-12-06 15:46:52.459261] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument
BaseBdev1
15:46:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
15:46:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1
00:20:10.254 15:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:20:10.254 15:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:10.254 15:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local
expected_state=online 00:20:10.254 15:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:10.254 15:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:10.254 15:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:10.254 15:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:10.254 15:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:10.254 15:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:10.254 15:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:10.254 15:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.254 15:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.254 15:46:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.254 15:46:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:10.254 15:46:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.254 15:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:10.254 "name": "raid_bdev1", 00:20:10.254 "uuid": "8aab5c34-7f48-4b04-b60b-93e7e38bad85", 00:20:10.254 "strip_size_kb": 64, 00:20:10.254 "state": "online", 00:20:10.254 "raid_level": "raid5f", 00:20:10.254 "superblock": true, 00:20:10.254 "num_base_bdevs": 4, 00:20:10.254 "num_base_bdevs_discovered": 3, 00:20:10.254 "num_base_bdevs_operational": 3, 00:20:10.254 "base_bdevs_list": [ 00:20:10.254 { 00:20:10.254 "name": null, 00:20:10.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.255 "is_configured": false, 00:20:10.255 
"data_offset": 0, 00:20:10.255 "data_size": 63488 00:20:10.255 }, 00:20:10.255 { 00:20:10.255 "name": "BaseBdev2", 00:20:10.255 "uuid": "0415c438-af98-5c75-8095-ffa857ed6a06", 00:20:10.255 "is_configured": true, 00:20:10.255 "data_offset": 2048, 00:20:10.255 "data_size": 63488 00:20:10.255 }, 00:20:10.255 { 00:20:10.255 "name": "BaseBdev3", 00:20:10.255 "uuid": "90e0d0b2-8d6f-5c28-a1ea-d1b8c3ffd86d", 00:20:10.255 "is_configured": true, 00:20:10.255 "data_offset": 2048, 00:20:10.255 "data_size": 63488 00:20:10.255 }, 00:20:10.255 { 00:20:10.255 "name": "BaseBdev4", 00:20:10.255 "uuid": "fef0820f-4c4a-52fa-b7aa-810b65e30c35", 00:20:10.255 "is_configured": true, 00:20:10.255 "data_offset": 2048, 00:20:10.255 "data_size": 63488 00:20:10.255 } 00:20:10.255 ] 00:20:10.255 }' 00:20:10.255 15:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:10.255 15:46:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:10.822 15:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:10.822 15:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:10.822 15:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:10.822 15:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:10.822 15:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:10.822 15:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.822 15:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.822 15:46:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.822 15:46:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:20:10.822 15:46:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.822 15:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:10.822 "name": "raid_bdev1", 00:20:10.822 "uuid": "8aab5c34-7f48-4b04-b60b-93e7e38bad85", 00:20:10.822 "strip_size_kb": 64, 00:20:10.822 "state": "online", 00:20:10.822 "raid_level": "raid5f", 00:20:10.822 "superblock": true, 00:20:10.822 "num_base_bdevs": 4, 00:20:10.822 "num_base_bdevs_discovered": 3, 00:20:10.822 "num_base_bdevs_operational": 3, 00:20:10.822 "base_bdevs_list": [ 00:20:10.822 { 00:20:10.822 "name": null, 00:20:10.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.822 "is_configured": false, 00:20:10.822 "data_offset": 0, 00:20:10.822 "data_size": 63488 00:20:10.822 }, 00:20:10.822 { 00:20:10.822 "name": "BaseBdev2", 00:20:10.822 "uuid": "0415c438-af98-5c75-8095-ffa857ed6a06", 00:20:10.822 "is_configured": true, 00:20:10.822 "data_offset": 2048, 00:20:10.822 "data_size": 63488 00:20:10.822 }, 00:20:10.822 { 00:20:10.822 "name": "BaseBdev3", 00:20:10.822 "uuid": "90e0d0b2-8d6f-5c28-a1ea-d1b8c3ffd86d", 00:20:10.822 "is_configured": true, 00:20:10.822 "data_offset": 2048, 00:20:10.822 "data_size": 63488 00:20:10.822 }, 00:20:10.822 { 00:20:10.822 "name": "BaseBdev4", 00:20:10.822 "uuid": "fef0820f-4c4a-52fa-b7aa-810b65e30c35", 00:20:10.822 "is_configured": true, 00:20:10.822 "data_offset": 2048, 00:20:10.822 "data_size": 63488 00:20:10.822 } 00:20:10.822 ] 00:20:10.822 }' 00:20:10.822 15:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:10.822 15:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:10.822 15:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:10.822 15:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:10.822 
15:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:10.822 15:46:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:20:10.822 15:46:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:10.822 15:46:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:10.822 15:46:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:10.822 15:46:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:10.822 15:46:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:10.822 15:46:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:10.822 15:46:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.822 15:46:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:10.822 [2024-12-06 15:46:54.020916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:10.822 [2024-12-06 15:46:54.021156] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:10.822 [2024-12-06 15:46:54.021177] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:10.822 request: 00:20:10.822 { 00:20:10.822 "base_bdev": "BaseBdev1", 00:20:10.822 "raid_bdev": "raid_bdev1", 00:20:10.822 "method": "bdev_raid_add_base_bdev", 00:20:10.822 "req_id": 1 00:20:10.822 } 00:20:10.822 Got JSON-RPC error response 00:20:10.822 response: 00:20:10.822 { 00:20:10.822 "code": -22, 00:20:10.822 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:20:10.822 } 00:20:10.822 15:46:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:10.822 15:46:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:20:10.822 15:46:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:10.822 15:46:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:10.822 15:46:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:10.822 15:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:11.757 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:11.757 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:11.757 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:11.757 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:11.757 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:11.757 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:11.757 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:11.757 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:11.757 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:11.757 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.757 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.757 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.757 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.757 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.015 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.015 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:12.015 "name": "raid_bdev1", 00:20:12.015 "uuid": "8aab5c34-7f48-4b04-b60b-93e7e38bad85", 00:20:12.015 "strip_size_kb": 64, 00:20:12.015 "state": "online", 00:20:12.015 "raid_level": "raid5f", 00:20:12.015 "superblock": true, 00:20:12.015 "num_base_bdevs": 4, 00:20:12.015 "num_base_bdevs_discovered": 3, 00:20:12.015 "num_base_bdevs_operational": 3, 00:20:12.015 "base_bdevs_list": [ 00:20:12.015 { 00:20:12.015 "name": null, 00:20:12.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.015 "is_configured": false, 00:20:12.015 "data_offset": 0, 00:20:12.015 "data_size": 63488 00:20:12.015 }, 00:20:12.015 { 00:20:12.015 "name": "BaseBdev2", 00:20:12.015 "uuid": "0415c438-af98-5c75-8095-ffa857ed6a06", 00:20:12.015 "is_configured": true, 00:20:12.015 "data_offset": 2048, 00:20:12.015 "data_size": 63488 00:20:12.015 }, 00:20:12.015 { 00:20:12.015 "name": "BaseBdev3", 00:20:12.015 "uuid": "90e0d0b2-8d6f-5c28-a1ea-d1b8c3ffd86d", 00:20:12.015 "is_configured": true, 00:20:12.015 "data_offset": 2048, 00:20:12.015 "data_size": 63488 00:20:12.015 }, 00:20:12.015 { 00:20:12.015 "name": "BaseBdev4", 00:20:12.015 "uuid": "fef0820f-4c4a-52fa-b7aa-810b65e30c35", 00:20:12.015 "is_configured": true, 00:20:12.015 "data_offset": 2048, 00:20:12.015 "data_size": 63488 00:20:12.015 } 00:20:12.015 ] 00:20:12.015 }' 00:20:12.015 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:12.015 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:20:12.273 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:12.273 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:12.273 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:12.273 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:12.273 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:12.273 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.273 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.273 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.273 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.273 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.273 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:12.273 "name": "raid_bdev1", 00:20:12.273 "uuid": "8aab5c34-7f48-4b04-b60b-93e7e38bad85", 00:20:12.273 "strip_size_kb": 64, 00:20:12.273 "state": "online", 00:20:12.273 "raid_level": "raid5f", 00:20:12.273 "superblock": true, 00:20:12.273 "num_base_bdevs": 4, 00:20:12.273 "num_base_bdevs_discovered": 3, 00:20:12.273 "num_base_bdevs_operational": 3, 00:20:12.273 "base_bdevs_list": [ 00:20:12.273 { 00:20:12.273 "name": null, 00:20:12.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.273 "is_configured": false, 00:20:12.273 "data_offset": 0, 00:20:12.273 "data_size": 63488 00:20:12.273 }, 00:20:12.273 { 00:20:12.273 "name": "BaseBdev2", 00:20:12.273 "uuid": "0415c438-af98-5c75-8095-ffa857ed6a06", 00:20:12.273 "is_configured": true, 
00:20:12.273 "data_offset": 2048, 00:20:12.273 "data_size": 63488 00:20:12.273 }, 00:20:12.273 { 00:20:12.273 "name": "BaseBdev3", 00:20:12.273 "uuid": "90e0d0b2-8d6f-5c28-a1ea-d1b8c3ffd86d", 00:20:12.273 "is_configured": true, 00:20:12.273 "data_offset": 2048, 00:20:12.273 "data_size": 63488 00:20:12.273 }, 00:20:12.273 { 00:20:12.273 "name": "BaseBdev4", 00:20:12.273 "uuid": "fef0820f-4c4a-52fa-b7aa-810b65e30c35", 00:20:12.273 "is_configured": true, 00:20:12.274 "data_offset": 2048, 00:20:12.274 "data_size": 63488 00:20:12.274 } 00:20:12.274 ] 00:20:12.274 }' 00:20:12.274 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:12.274 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:12.274 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:12.541 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:12.541 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85123 00:20:12.541 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85123 ']' 00:20:12.541 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85123 00:20:12.541 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:20:12.541 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:12.541 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85123 00:20:12.541 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:12.541 killing process with pid 85123 00:20:12.541 Received shutdown signal, test time was about 60.000000 seconds 00:20:12.541 00:20:12.541 Latency(us) 00:20:12.541 [2024-12-06T15:46:55.836Z] Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.541 [2024-12-06T15:46:55.836Z] =================================================================================================================== 00:20:12.541 [2024-12-06T15:46:55.836Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:12.541 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:12.541 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85123' 00:20:12.541 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85123 00:20:12.541 [2024-12-06 15:46:55.654408] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:12.541 15:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85123 00:20:12.541 [2024-12-06 15:46:55.654593] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:12.541 [2024-12-06 15:46:55.654687] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:12.541 [2024-12-06 15:46:55.654705] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:13.109 [2024-12-06 15:46:56.179279] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:14.547 15:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:20:14.547 00:20:14.547 real 0m26.892s 00:20:14.547 user 0m33.105s 00:20:14.547 sys 0m3.554s 00:20:14.547 15:46:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:14.547 15:46:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.547 ************************************ 00:20:14.547 END TEST raid5f_rebuild_test_sb 00:20:14.547 ************************************ 00:20:14.547 15:46:57 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:20:14.547 15:46:57 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:20:14.547 15:46:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:14.547 15:46:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:14.547 15:46:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:14.547 ************************************ 00:20:14.547 START TEST raid_state_function_test_sb_4k 00:20:14.547 ************************************ 00:20:14.547 15:46:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:20:14.547 15:46:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:20:14.547 15:46:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:20:14.547 15:46:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:14.547 15:46:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:14.547 15:46:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:14.547 15:46:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:14.547 15:46:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:14.547 15:46:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:14.547 15:46:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:14.547 15:46:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:14.547 15:46:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:14.547 15:46:57 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:14.547 15:46:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:14.547 15:46:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:14.547 15:46:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:14.547 15:46:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:14.547 15:46:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:14.547 15:46:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:14.547 15:46:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:20:14.547 15:46:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:20:14.547 15:46:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:14.547 15:46:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:14.547 Process raid pid: 85932 00:20:14.547 15:46:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=85932 00:20:14.547 15:46:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:14.547 15:46:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85932' 00:20:14.547 15:46:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 85932 00:20:14.547 15:46:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 85932 ']' 00:20:14.547 15:46:57 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.547 15:46:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:14.547 15:46:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.547 15:46:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:14.547 15:46:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:14.547 [2024-12-06 15:46:57.593190] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:20:14.547 [2024-12-06 15:46:57.593354] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:14.547 [2024-12-06 15:46:57.766657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.805 [2024-12-06 15:46:57.902144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.063 [2024-12-06 15:46:58.144706] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:15.063 [2024-12-06 15:46:58.144742] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:15.323 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:15.323 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:20:15.323 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:20:15.323 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.323 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:15.323 [2024-12-06 15:46:58.428796] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:15.323 [2024-12-06 15:46:58.428865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:15.323 [2024-12-06 15:46:58.428878] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:15.323 [2024-12-06 15:46:58.428892] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:15.323 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.323 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:15.323 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:15.323 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:15.323 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:15.323 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:15.323 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:15.323 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:15.323 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:15.323 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:15.323 
15:46:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:15.323 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.323 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:15.323 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.323 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:15.323 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.323 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:15.323 "name": "Existed_Raid", 00:20:15.323 "uuid": "638069dc-7cf6-48d3-9dba-90b722859c76", 00:20:15.323 "strip_size_kb": 0, 00:20:15.323 "state": "configuring", 00:20:15.323 "raid_level": "raid1", 00:20:15.323 "superblock": true, 00:20:15.323 "num_base_bdevs": 2, 00:20:15.323 "num_base_bdevs_discovered": 0, 00:20:15.323 "num_base_bdevs_operational": 2, 00:20:15.323 "base_bdevs_list": [ 00:20:15.323 { 00:20:15.323 "name": "BaseBdev1", 00:20:15.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.323 "is_configured": false, 00:20:15.323 "data_offset": 0, 00:20:15.323 "data_size": 0 00:20:15.323 }, 00:20:15.323 { 00:20:15.323 "name": "BaseBdev2", 00:20:15.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.323 "is_configured": false, 00:20:15.323 "data_offset": 0, 00:20:15.323 "data_size": 0 00:20:15.323 } 00:20:15.323 ] 00:20:15.323 }' 00:20:15.323 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:15.323 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:15.582 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:20:15.582 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.582 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:15.582 [2024-12-06 15:46:58.808246] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:15.582 [2024-12-06 15:46:58.808417] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:15.582 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.582 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:15.582 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.582 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:15.582 [2024-12-06 15:46:58.816226] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:15.582 [2024-12-06 15:46:58.816273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:15.582 [2024-12-06 15:46:58.816284] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:15.582 [2024-12-06 15:46:58.816301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:15.582 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.582 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:20:15.582 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.582 15:46:58 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:15.582 [2024-12-06 15:46:58.868835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:15.582 BaseBdev1 00:20:15.582 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.582 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:15.582 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:15.582 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:15.582 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:20:15.582 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:15.582 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:15.582 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:15.582 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.582 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:15.841 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.841 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:15.841 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.841 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:15.841 [ 00:20:15.841 { 00:20:15.841 "name": "BaseBdev1", 00:20:15.841 "aliases": [ 00:20:15.841 
"516909ce-d755-4d62-a5c8-25ffd2ecb62d" 00:20:15.841 ], 00:20:15.841 "product_name": "Malloc disk", 00:20:15.841 "block_size": 4096, 00:20:15.841 "num_blocks": 8192, 00:20:15.841 "uuid": "516909ce-d755-4d62-a5c8-25ffd2ecb62d", 00:20:15.841 "assigned_rate_limits": { 00:20:15.841 "rw_ios_per_sec": 0, 00:20:15.841 "rw_mbytes_per_sec": 0, 00:20:15.841 "r_mbytes_per_sec": 0, 00:20:15.841 "w_mbytes_per_sec": 0 00:20:15.841 }, 00:20:15.841 "claimed": true, 00:20:15.841 "claim_type": "exclusive_write", 00:20:15.841 "zoned": false, 00:20:15.841 "supported_io_types": { 00:20:15.841 "read": true, 00:20:15.841 "write": true, 00:20:15.841 "unmap": true, 00:20:15.841 "flush": true, 00:20:15.841 "reset": true, 00:20:15.841 "nvme_admin": false, 00:20:15.841 "nvme_io": false, 00:20:15.841 "nvme_io_md": false, 00:20:15.841 "write_zeroes": true, 00:20:15.841 "zcopy": true, 00:20:15.841 "get_zone_info": false, 00:20:15.841 "zone_management": false, 00:20:15.841 "zone_append": false, 00:20:15.841 "compare": false, 00:20:15.841 "compare_and_write": false, 00:20:15.841 "abort": true, 00:20:15.841 "seek_hole": false, 00:20:15.841 "seek_data": false, 00:20:15.841 "copy": true, 00:20:15.841 "nvme_iov_md": false 00:20:15.841 }, 00:20:15.841 "memory_domains": [ 00:20:15.841 { 00:20:15.841 "dma_device_id": "system", 00:20:15.841 "dma_device_type": 1 00:20:15.841 }, 00:20:15.841 { 00:20:15.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:15.841 "dma_device_type": 2 00:20:15.841 } 00:20:15.841 ], 00:20:15.841 "driver_specific": {} 00:20:15.841 } 00:20:15.841 ] 00:20:15.841 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.841 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:20:15.841 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:15.841 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:15.841 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:15.841 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:15.841 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:15.841 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:15.841 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:15.841 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:15.841 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:15.841 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:15.841 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.841 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:15.841 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.841 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:15.841 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.841 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:15.841 "name": "Existed_Raid", 00:20:15.841 "uuid": "7705cbf2-37dd-4087-b78e-d357a2c3f5c7", 00:20:15.841 "strip_size_kb": 0, 00:20:15.841 "state": "configuring", 00:20:15.841 "raid_level": "raid1", 00:20:15.841 "superblock": true, 00:20:15.841 "num_base_bdevs": 2, 00:20:15.841 
"num_base_bdevs_discovered": 1, 00:20:15.841 "num_base_bdevs_operational": 2, 00:20:15.841 "base_bdevs_list": [ 00:20:15.841 { 00:20:15.841 "name": "BaseBdev1", 00:20:15.841 "uuid": "516909ce-d755-4d62-a5c8-25ffd2ecb62d", 00:20:15.841 "is_configured": true, 00:20:15.841 "data_offset": 256, 00:20:15.841 "data_size": 7936 00:20:15.841 }, 00:20:15.841 { 00:20:15.841 "name": "BaseBdev2", 00:20:15.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.841 "is_configured": false, 00:20:15.841 "data_offset": 0, 00:20:15.841 "data_size": 0 00:20:15.841 } 00:20:15.841 ] 00:20:15.841 }' 00:20:15.841 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:15.841 15:46:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:16.101 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:16.101 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.101 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:16.101 [2024-12-06 15:46:59.344394] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:16.101 [2024-12-06 15:46:59.344577] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:16.101 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.101 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:16.101 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.101 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:16.101 [2024-12-06 15:46:59.356446] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:16.101 [2024-12-06 15:46:59.359005] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:16.101 [2024-12-06 15:46:59.359053] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:16.101 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.101 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:16.101 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:16.101 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:16.101 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:16.101 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:16.101 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:16.101 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:16.101 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:16.101 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:16.101 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:16.101 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:16.101 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:16.101 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:16.101 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:16.101 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.101 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:16.101 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.360 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:16.360 "name": "Existed_Raid", 00:20:16.360 "uuid": "476ddab4-52ee-47ab-ab5e-7968eadaa2b4", 00:20:16.360 "strip_size_kb": 0, 00:20:16.360 "state": "configuring", 00:20:16.360 "raid_level": "raid1", 00:20:16.360 "superblock": true, 00:20:16.360 "num_base_bdevs": 2, 00:20:16.360 "num_base_bdevs_discovered": 1, 00:20:16.360 "num_base_bdevs_operational": 2, 00:20:16.360 "base_bdevs_list": [ 00:20:16.360 { 00:20:16.360 "name": "BaseBdev1", 00:20:16.360 "uuid": "516909ce-d755-4d62-a5c8-25ffd2ecb62d", 00:20:16.360 "is_configured": true, 00:20:16.360 "data_offset": 256, 00:20:16.360 "data_size": 7936 00:20:16.360 }, 00:20:16.360 { 00:20:16.360 "name": "BaseBdev2", 00:20:16.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.360 "is_configured": false, 00:20:16.360 "data_offset": 0, 00:20:16.360 "data_size": 0 00:20:16.360 } 00:20:16.360 ] 00:20:16.360 }' 00:20:16.360 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:16.360 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:16.620 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:20:16.621 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.621 15:46:59 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:16.621 [2024-12-06 15:46:59.797918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:16.621 [2024-12-06 15:46:59.798398] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:16.621 [2024-12-06 15:46:59.798422] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:16.621 BaseBdev2 00:20:16.621 [2024-12-06 15:46:59.798756] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:16.621 [2024-12-06 15:46:59.798975] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:16.621 [2024-12-06 15:46:59.798994] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:16.621 [2024-12-06 15:46:59.799156] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:16.621 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.621 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:16.621 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:16.621 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:16.621 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:20:16.621 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:16.621 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:16.621 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:16.621 15:46:59 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.621 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:16.621 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.621 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:16.621 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.621 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:16.621 [ 00:20:16.621 { 00:20:16.621 "name": "BaseBdev2", 00:20:16.621 "aliases": [ 00:20:16.621 "c7794970-632a-4fe6-8c62-1d285f60e503" 00:20:16.621 ], 00:20:16.621 "product_name": "Malloc disk", 00:20:16.621 "block_size": 4096, 00:20:16.621 "num_blocks": 8192, 00:20:16.621 "uuid": "c7794970-632a-4fe6-8c62-1d285f60e503", 00:20:16.621 "assigned_rate_limits": { 00:20:16.621 "rw_ios_per_sec": 0, 00:20:16.621 "rw_mbytes_per_sec": 0, 00:20:16.621 "r_mbytes_per_sec": 0, 00:20:16.621 "w_mbytes_per_sec": 0 00:20:16.621 }, 00:20:16.621 "claimed": true, 00:20:16.621 "claim_type": "exclusive_write", 00:20:16.621 "zoned": false, 00:20:16.621 "supported_io_types": { 00:20:16.621 "read": true, 00:20:16.621 "write": true, 00:20:16.621 "unmap": true, 00:20:16.621 "flush": true, 00:20:16.621 "reset": true, 00:20:16.621 "nvme_admin": false, 00:20:16.621 "nvme_io": false, 00:20:16.621 "nvme_io_md": false, 00:20:16.621 "write_zeroes": true, 00:20:16.621 "zcopy": true, 00:20:16.621 "get_zone_info": false, 00:20:16.621 "zone_management": false, 00:20:16.621 "zone_append": false, 00:20:16.621 "compare": false, 00:20:16.621 "compare_and_write": false, 00:20:16.621 "abort": true, 00:20:16.621 "seek_hole": false, 00:20:16.621 "seek_data": false, 00:20:16.621 "copy": true, 00:20:16.621 "nvme_iov_md": false 
00:20:16.621 }, 00:20:16.621 "memory_domains": [ 00:20:16.621 { 00:20:16.621 "dma_device_id": "system", 00:20:16.621 "dma_device_type": 1 00:20:16.621 }, 00:20:16.621 { 00:20:16.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:16.621 "dma_device_type": 2 00:20:16.621 } 00:20:16.621 ], 00:20:16.621 "driver_specific": {} 00:20:16.621 } 00:20:16.621 ] 00:20:16.621 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.621 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:20:16.621 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:16.621 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:16.621 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:16.621 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:16.621 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:16.621 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:16.621 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:16.621 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:16.621 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:16.621 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:16.621 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:16.621 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:20:16.621 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.621 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:16.621 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.621 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:16.621 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.621 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:16.621 "name": "Existed_Raid", 00:20:16.621 "uuid": "476ddab4-52ee-47ab-ab5e-7968eadaa2b4", 00:20:16.621 "strip_size_kb": 0, 00:20:16.621 "state": "online", 00:20:16.621 "raid_level": "raid1", 00:20:16.621 "superblock": true, 00:20:16.621 "num_base_bdevs": 2, 00:20:16.621 "num_base_bdevs_discovered": 2, 00:20:16.621 "num_base_bdevs_operational": 2, 00:20:16.621 "base_bdevs_list": [ 00:20:16.621 { 00:20:16.621 "name": "BaseBdev1", 00:20:16.621 "uuid": "516909ce-d755-4d62-a5c8-25ffd2ecb62d", 00:20:16.621 "is_configured": true, 00:20:16.621 "data_offset": 256, 00:20:16.621 "data_size": 7936 00:20:16.621 }, 00:20:16.621 { 00:20:16.621 "name": "BaseBdev2", 00:20:16.621 "uuid": "c7794970-632a-4fe6-8c62-1d285f60e503", 00:20:16.621 "is_configured": true, 00:20:16.621 "data_offset": 256, 00:20:16.621 "data_size": 7936 00:20:16.621 } 00:20:16.621 ] 00:20:16.621 }' 00:20:16.621 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:16.621 15:46:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:17.192 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:17.192 15:47:00 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:17.192 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:17.192 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:17.192 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:20:17.192 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:17.192 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:17.192 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:17.192 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.192 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:17.192 [2024-12-06 15:47:00.230122] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:17.192 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.192 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:17.192 "name": "Existed_Raid", 00:20:17.192 "aliases": [ 00:20:17.192 "476ddab4-52ee-47ab-ab5e-7968eadaa2b4" 00:20:17.192 ], 00:20:17.192 "product_name": "Raid Volume", 00:20:17.192 "block_size": 4096, 00:20:17.192 "num_blocks": 7936, 00:20:17.192 "uuid": "476ddab4-52ee-47ab-ab5e-7968eadaa2b4", 00:20:17.192 "assigned_rate_limits": { 00:20:17.192 "rw_ios_per_sec": 0, 00:20:17.192 "rw_mbytes_per_sec": 0, 00:20:17.192 "r_mbytes_per_sec": 0, 00:20:17.192 "w_mbytes_per_sec": 0 00:20:17.192 }, 00:20:17.192 "claimed": false, 00:20:17.192 "zoned": false, 00:20:17.192 "supported_io_types": { 00:20:17.192 "read": true, 
00:20:17.192 "write": true, 00:20:17.192 "unmap": false, 00:20:17.192 "flush": false, 00:20:17.192 "reset": true, 00:20:17.192 "nvme_admin": false, 00:20:17.192 "nvme_io": false, 00:20:17.192 "nvme_io_md": false, 00:20:17.192 "write_zeroes": true, 00:20:17.192 "zcopy": false, 00:20:17.192 "get_zone_info": false, 00:20:17.192 "zone_management": false, 00:20:17.192 "zone_append": false, 00:20:17.192 "compare": false, 00:20:17.192 "compare_and_write": false, 00:20:17.192 "abort": false, 00:20:17.192 "seek_hole": false, 00:20:17.192 "seek_data": false, 00:20:17.192 "copy": false, 00:20:17.192 "nvme_iov_md": false 00:20:17.192 }, 00:20:17.192 "memory_domains": [ 00:20:17.192 { 00:20:17.192 "dma_device_id": "system", 00:20:17.192 "dma_device_type": 1 00:20:17.192 }, 00:20:17.192 { 00:20:17.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:17.192 "dma_device_type": 2 00:20:17.192 }, 00:20:17.192 { 00:20:17.192 "dma_device_id": "system", 00:20:17.192 "dma_device_type": 1 00:20:17.192 }, 00:20:17.192 { 00:20:17.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:17.192 "dma_device_type": 2 00:20:17.192 } 00:20:17.192 ], 00:20:17.192 "driver_specific": { 00:20:17.192 "raid": { 00:20:17.192 "uuid": "476ddab4-52ee-47ab-ab5e-7968eadaa2b4", 00:20:17.192 "strip_size_kb": 0, 00:20:17.192 "state": "online", 00:20:17.192 "raid_level": "raid1", 00:20:17.192 "superblock": true, 00:20:17.192 "num_base_bdevs": 2, 00:20:17.192 "num_base_bdevs_discovered": 2, 00:20:17.192 "num_base_bdevs_operational": 2, 00:20:17.192 "base_bdevs_list": [ 00:20:17.192 { 00:20:17.192 "name": "BaseBdev1", 00:20:17.192 "uuid": "516909ce-d755-4d62-a5c8-25ffd2ecb62d", 00:20:17.192 "is_configured": true, 00:20:17.192 "data_offset": 256, 00:20:17.192 "data_size": 7936 00:20:17.192 }, 00:20:17.192 { 00:20:17.192 "name": "BaseBdev2", 00:20:17.192 "uuid": "c7794970-632a-4fe6-8c62-1d285f60e503", 00:20:17.192 "is_configured": true, 00:20:17.192 "data_offset": 256, 00:20:17.192 "data_size": 7936 00:20:17.192 } 
00:20:17.192 ] 00:20:17.192 } 00:20:17.192 } 00:20:17.192 }' 00:20:17.192 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:17.193 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:17.193 BaseBdev2' 00:20:17.193 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:17.193 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:20:17.193 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:17.193 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:17.193 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.193 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:17.193 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:17.193 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.193 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:17.193 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:17.193 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:17.193 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:17.193 15:47:00 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:17.193 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.193 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:17.193 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.193 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:17.193 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:17.193 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:17.193 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.193 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:17.193 [2024-12-06 15:47:00.445718] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:17.453 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.453 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:17.453 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:20:17.453 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:17.453 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:20:17.453 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:17.453 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:20:17.453 15:47:00 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:17.453 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:17.453 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:17.453 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:17.453 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:17.453 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.453 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:17.453 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:17.453 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.453 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.453 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:17.453 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.453 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:17.453 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.453 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.453 "name": "Existed_Raid", 00:20:17.453 "uuid": "476ddab4-52ee-47ab-ab5e-7968eadaa2b4", 00:20:17.453 "strip_size_kb": 0, 00:20:17.453 "state": "online", 00:20:17.453 "raid_level": "raid1", 00:20:17.453 "superblock": true, 00:20:17.453 
"num_base_bdevs": 2, 00:20:17.453 "num_base_bdevs_discovered": 1, 00:20:17.453 "num_base_bdevs_operational": 1, 00:20:17.453 "base_bdevs_list": [ 00:20:17.453 { 00:20:17.453 "name": null, 00:20:17.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.453 "is_configured": false, 00:20:17.453 "data_offset": 0, 00:20:17.453 "data_size": 7936 00:20:17.453 }, 00:20:17.453 { 00:20:17.453 "name": "BaseBdev2", 00:20:17.453 "uuid": "c7794970-632a-4fe6-8c62-1d285f60e503", 00:20:17.453 "is_configured": true, 00:20:17.453 "data_offset": 256, 00:20:17.453 "data_size": 7936 00:20:17.453 } 00:20:17.453 ] 00:20:17.453 }' 00:20:17.453 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.453 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:17.713 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:17.713 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:17.713 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.713 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:17.713 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.713 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:17.713 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.713 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:17.713 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:17.713 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:20:17.713 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.713 15:47:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:17.713 [2024-12-06 15:47:00.996689] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:17.713 [2024-12-06 15:47:00.996822] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:17.973 [2024-12-06 15:47:01.100977] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:17.973 [2024-12-06 15:47:01.101048] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:17.973 [2024-12-06 15:47:01.101065] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:17.973 15:47:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.973 15:47:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:17.973 15:47:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:17.973 15:47:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.973 15:47:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:17.973 15:47:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.973 15:47:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:17.973 15:47:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.973 15:47:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:17.973 15:47:01 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:17.973 15:47:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:20:17.973 15:47:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 85932 00:20:17.973 15:47:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 85932 ']' 00:20:17.973 15:47:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 85932 00:20:17.973 15:47:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:20:17.973 15:47:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:17.973 15:47:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85932 00:20:17.973 15:47:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:17.973 killing process with pid 85932 00:20:17.973 15:47:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:17.973 15:47:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85932' 00:20:17.973 15:47:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 85932 00:20:17.973 [2024-12-06 15:47:01.204197] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:17.973 15:47:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 85932 00:20:17.973 [2024-12-06 15:47:01.223413] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:19.353 15:47:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:20:19.353 00:20:19.353 real 0m4.962s 00:20:19.353 user 0m6.861s 00:20:19.353 sys 0m1.060s 00:20:19.353 15:47:02 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:19.353 ************************************ 00:20:19.353 END TEST raid_state_function_test_sb_4k 00:20:19.353 ************************************ 00:20:19.353 15:47:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:19.353 15:47:02 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:20:19.353 15:47:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:19.353 15:47:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:19.353 15:47:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:19.353 ************************************ 00:20:19.353 START TEST raid_superblock_test_4k 00:20:19.353 ************************************ 00:20:19.353 15:47:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:20:19.353 15:47:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:20:19.353 15:47:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:20:19.353 15:47:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:19.353 15:47:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:19.353 15:47:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:19.353 15:47:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:20:19.353 15:47:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:19.353 15:47:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:19.353 15:47:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:19.353 
15:47:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:19.353 15:47:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:19.353 15:47:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:19.353 15:47:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:19.353 15:47:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:20:19.353 15:47:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:20:19.353 15:47:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86180 00:20:19.353 15:47:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:19.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.353 15:47:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86180 00:20:19.353 15:47:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86180 ']' 00:20:19.353 15:47:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.353 15:47:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:19.353 15:47:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.353 15:47:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:19.353 15:47:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:19.353 [2024-12-06 15:47:02.637639] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:20:19.353 [2024-12-06 15:47:02.637815] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86180 ] 00:20:19.613 [2024-12-06 15:47:02.821298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.873 [2024-12-06 15:47:02.949827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.133 [2024-12-06 15:47:03.190151] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:20.133 [2024-12-06 15:47:03.190190] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:20.393 15:47:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:20.393 15:47:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:20:20.393 15:47:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:20.393 15:47:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:20.393 15:47:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:20.393 15:47:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:20.393 15:47:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:20.393 15:47:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:20.393 15:47:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:20.393 15:47:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:20.393 15:47:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:20:20.393 15:47:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.393 15:47:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:20.393 malloc1 00:20:20.393 15:47:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.393 15:47:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:20.393 15:47:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.393 15:47:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:20.393 [2024-12-06 15:47:03.526174] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:20.394 [2024-12-06 15:47:03.526249] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:20.394 [2024-12-06 15:47:03.526291] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:20.394 [2024-12-06 15:47:03.526304] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:20.394 [2024-12-06 15:47:03.528992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:20.394 [2024-12-06 15:47:03.529031] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:20.394 pt1 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:20.394 malloc2 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:20.394 [2024-12-06 15:47:03.591966] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:20.394 [2024-12-06 15:47:03.592028] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:20.394 [2024-12-06 15:47:03.592078] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:20.394 [2024-12-06 15:47:03.592090] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:20.394 [2024-12-06 15:47:03.594833] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:20.394 [2024-12-06 
15:47:03.594885] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:20.394 pt2 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:20.394 [2024-12-06 15:47:03.604024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:20.394 [2024-12-06 15:47:03.606450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:20.394 [2024-12-06 15:47:03.606649] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:20.394 [2024-12-06 15:47:03.606669] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:20.394 [2024-12-06 15:47:03.606941] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:20.394 [2024-12-06 15:47:03.607116] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:20.394 [2024-12-06 15:47:03.607136] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:20.394 [2024-12-06 15:47:03.607298] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:20.394 "name": "raid_bdev1", 00:20:20.394 "uuid": "90be4a77-1721-4055-a56e-1b2cdf0ba64d", 00:20:20.394 "strip_size_kb": 0, 00:20:20.394 "state": "online", 00:20:20.394 "raid_level": "raid1", 00:20:20.394 "superblock": true, 00:20:20.394 "num_base_bdevs": 2, 00:20:20.394 
"num_base_bdevs_discovered": 2, 00:20:20.394 "num_base_bdevs_operational": 2, 00:20:20.394 "base_bdevs_list": [ 00:20:20.394 { 00:20:20.394 "name": "pt1", 00:20:20.394 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:20.394 "is_configured": true, 00:20:20.394 "data_offset": 256, 00:20:20.394 "data_size": 7936 00:20:20.394 }, 00:20:20.394 { 00:20:20.394 "name": "pt2", 00:20:20.394 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:20.394 "is_configured": true, 00:20:20.394 "data_offset": 256, 00:20:20.394 "data_size": 7936 00:20:20.394 } 00:20:20.394 ] 00:20:20.394 }' 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:20.394 15:47:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:20.962 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:20.962 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:20.962 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:20.962 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:20.962 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:20:20.962 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:20.962 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:20.962 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.962 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:20.962 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:20.962 [2024-12-06 15:47:04.043667] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:20:20.962 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.962 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:20.962 "name": "raid_bdev1", 00:20:20.962 "aliases": [ 00:20:20.962 "90be4a77-1721-4055-a56e-1b2cdf0ba64d" 00:20:20.963 ], 00:20:20.963 "product_name": "Raid Volume", 00:20:20.963 "block_size": 4096, 00:20:20.963 "num_blocks": 7936, 00:20:20.963 "uuid": "90be4a77-1721-4055-a56e-1b2cdf0ba64d", 00:20:20.963 "assigned_rate_limits": { 00:20:20.963 "rw_ios_per_sec": 0, 00:20:20.963 "rw_mbytes_per_sec": 0, 00:20:20.963 "r_mbytes_per_sec": 0, 00:20:20.963 "w_mbytes_per_sec": 0 00:20:20.963 }, 00:20:20.963 "claimed": false, 00:20:20.963 "zoned": false, 00:20:20.963 "supported_io_types": { 00:20:20.963 "read": true, 00:20:20.963 "write": true, 00:20:20.963 "unmap": false, 00:20:20.963 "flush": false, 00:20:20.963 "reset": true, 00:20:20.963 "nvme_admin": false, 00:20:20.963 "nvme_io": false, 00:20:20.963 "nvme_io_md": false, 00:20:20.963 "write_zeroes": true, 00:20:20.963 "zcopy": false, 00:20:20.963 "get_zone_info": false, 00:20:20.963 "zone_management": false, 00:20:20.963 "zone_append": false, 00:20:20.963 "compare": false, 00:20:20.963 "compare_and_write": false, 00:20:20.963 "abort": false, 00:20:20.963 "seek_hole": false, 00:20:20.963 "seek_data": false, 00:20:20.963 "copy": false, 00:20:20.963 "nvme_iov_md": false 00:20:20.963 }, 00:20:20.963 "memory_domains": [ 00:20:20.963 { 00:20:20.963 "dma_device_id": "system", 00:20:20.963 "dma_device_type": 1 00:20:20.963 }, 00:20:20.963 { 00:20:20.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:20.963 "dma_device_type": 2 00:20:20.963 }, 00:20:20.963 { 00:20:20.963 "dma_device_id": "system", 00:20:20.963 "dma_device_type": 1 00:20:20.963 }, 00:20:20.963 { 00:20:20.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:20.963 "dma_device_type": 2 00:20:20.963 } 00:20:20.963 ], 
00:20:20.963 "driver_specific": { 00:20:20.963 "raid": { 00:20:20.963 "uuid": "90be4a77-1721-4055-a56e-1b2cdf0ba64d", 00:20:20.963 "strip_size_kb": 0, 00:20:20.963 "state": "online", 00:20:20.963 "raid_level": "raid1", 00:20:20.963 "superblock": true, 00:20:20.963 "num_base_bdevs": 2, 00:20:20.963 "num_base_bdevs_discovered": 2, 00:20:20.963 "num_base_bdevs_operational": 2, 00:20:20.963 "base_bdevs_list": [ 00:20:20.963 { 00:20:20.963 "name": "pt1", 00:20:20.963 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:20.963 "is_configured": true, 00:20:20.963 "data_offset": 256, 00:20:20.963 "data_size": 7936 00:20:20.963 }, 00:20:20.963 { 00:20:20.963 "name": "pt2", 00:20:20.963 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:20.963 "is_configured": true, 00:20:20.963 "data_offset": 256, 00:20:20.963 "data_size": 7936 00:20:20.963 } 00:20:20.963 ] 00:20:20.963 } 00:20:20.963 } 00:20:20.963 }' 00:20:20.963 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:20.963 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:20.963 pt2' 00:20:20.963 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:20.963 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:20:20.963 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:20.963 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:20.963 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:20.963 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.963 15:47:04 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:20.963 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.963 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:20.963 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:20.963 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:20.963 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:20.963 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:20.963 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.963 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:20.963 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.963 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:20.963 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:20.963 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:20.963 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:20.963 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.963 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:21.222 [2024-12-06 15:47:04.259292] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=90be4a77-1721-4055-a56e-1b2cdf0ba64d 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 90be4a77-1721-4055-a56e-1b2cdf0ba64d ']' 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:21.222 [2024-12-06 15:47:04.298964] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:21.222 [2024-12-06 15:47:04.298995] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:21.222 [2024-12-06 15:47:04.299097] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:21.222 [2024-12-06 15:47:04.299162] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:21.222 [2024-12-06 15:47:04.299180] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:21.222 [2024-12-06 15:47:04.430817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:21.222 [2024-12-06 15:47:04.433211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:21.222 [2024-12-06 15:47:04.433298] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:21.222 [2024-12-06 15:47:04.433358] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:21.222 [2024-12-06 15:47:04.433377] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:21.222 [2024-12-06 15:47:04.433389] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:21.222 request: 00:20:21.222 { 00:20:21.222 "name": "raid_bdev1", 00:20:21.222 "raid_level": "raid1", 00:20:21.222 "base_bdevs": [ 00:20:21.222 "malloc1", 00:20:21.222 "malloc2" 00:20:21.222 ], 00:20:21.222 "superblock": false, 00:20:21.222 "method": "bdev_raid_create", 00:20:21.222 "req_id": 1 00:20:21.222 } 00:20:21.222 Got JSON-RPC error response 00:20:21.222 response: 00:20:21.222 { 00:20:21.222 "code": -17, 00:20:21.222 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:21.222 } 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:21.222 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:20:21.223 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:21.223 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:21.223 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:21.223 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.223 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:21.223 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.223 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:21.223 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.223 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:21.223 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:21.223 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:20:21.223 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.223 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:21.223 [2024-12-06 15:47:04.498697] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:21.223 [2024-12-06 15:47:04.498755] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:21.223 [2024-12-06 15:47:04.498780] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:21.223 [2024-12-06 15:47:04.498794] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:21.223 [2024-12-06 15:47:04.501546] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:21.223 [2024-12-06 15:47:04.501581] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:21.223 [2024-12-06 15:47:04.501665] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:21.223 [2024-12-06 15:47:04.501727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:21.223 pt1 00:20:21.223 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.223 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:21.223 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:21.223 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:21.223 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:21.223 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:21.223 15:47:04 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:21.223 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:21.223 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:21.223 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:21.223 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:21.223 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.223 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.223 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:21.223 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.481 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.481 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:21.481 "name": "raid_bdev1", 00:20:21.481 "uuid": "90be4a77-1721-4055-a56e-1b2cdf0ba64d", 00:20:21.481 "strip_size_kb": 0, 00:20:21.481 "state": "configuring", 00:20:21.481 "raid_level": "raid1", 00:20:21.481 "superblock": true, 00:20:21.481 "num_base_bdevs": 2, 00:20:21.481 "num_base_bdevs_discovered": 1, 00:20:21.481 "num_base_bdevs_operational": 2, 00:20:21.481 "base_bdevs_list": [ 00:20:21.481 { 00:20:21.481 "name": "pt1", 00:20:21.481 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:21.481 "is_configured": true, 00:20:21.481 "data_offset": 256, 00:20:21.481 "data_size": 7936 00:20:21.481 }, 00:20:21.481 { 00:20:21.481 "name": null, 00:20:21.481 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:21.481 "is_configured": false, 00:20:21.481 "data_offset": 256, 00:20:21.481 "data_size": 7936 00:20:21.482 } 
00:20:21.482 ] 00:20:21.482 }' 00:20:21.482 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:21.482 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:21.741 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:20:21.741 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:21.741 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:21.741 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:21.741 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.741 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:21.741 [2024-12-06 15:47:04.898224] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:21.742 [2024-12-06 15:47:04.898307] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:21.742 [2024-12-06 15:47:04.898335] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:21.742 [2024-12-06 15:47:04.898350] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:21.742 [2024-12-06 15:47:04.898882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:21.742 [2024-12-06 15:47:04.898914] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:21.742 [2024-12-06 15:47:04.899006] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:21.742 [2024-12-06 15:47:04.899041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:21.742 [2024-12-06 15:47:04.899177] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:20:21.742 [2024-12-06 15:47:04.899192] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:21.742 [2024-12-06 15:47:04.899477] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:21.742 [2024-12-06 15:47:04.899669] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:21.742 [2024-12-06 15:47:04.899680] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:21.742 [2024-12-06 15:47:04.899823] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:21.742 pt2 00:20:21.742 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.742 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:21.742 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:21.742 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:21.742 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:21.742 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:21.742 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:21.742 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:21.742 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:21.742 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:21.742 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:21.742 15:47:04 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:21.742 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:21.742 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.742 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.742 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:21.742 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.742 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.742 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:21.742 "name": "raid_bdev1", 00:20:21.742 "uuid": "90be4a77-1721-4055-a56e-1b2cdf0ba64d", 00:20:21.742 "strip_size_kb": 0, 00:20:21.742 "state": "online", 00:20:21.742 "raid_level": "raid1", 00:20:21.742 "superblock": true, 00:20:21.742 "num_base_bdevs": 2, 00:20:21.742 "num_base_bdevs_discovered": 2, 00:20:21.742 "num_base_bdevs_operational": 2, 00:20:21.742 "base_bdevs_list": [ 00:20:21.742 { 00:20:21.742 "name": "pt1", 00:20:21.742 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:21.742 "is_configured": true, 00:20:21.742 "data_offset": 256, 00:20:21.742 "data_size": 7936 00:20:21.742 }, 00:20:21.742 { 00:20:21.742 "name": "pt2", 00:20:21.742 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:21.742 "is_configured": true, 00:20:21.742 "data_offset": 256, 00:20:21.742 "data_size": 7936 00:20:21.742 } 00:20:21.742 ] 00:20:21.742 }' 00:20:21.742 15:47:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:21.742 15:47:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:22.316 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:20:22.316 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:22.316 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:22.316 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:22.316 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:20:22.316 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:22.316 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:22.316 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:22.316 15:47:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.316 15:47:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:22.316 [2024-12-06 15:47:05.321961] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:22.316 15:47:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.316 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:22.316 "name": "raid_bdev1", 00:20:22.316 "aliases": [ 00:20:22.316 "90be4a77-1721-4055-a56e-1b2cdf0ba64d" 00:20:22.316 ], 00:20:22.316 "product_name": "Raid Volume", 00:20:22.316 "block_size": 4096, 00:20:22.316 "num_blocks": 7936, 00:20:22.316 "uuid": "90be4a77-1721-4055-a56e-1b2cdf0ba64d", 00:20:22.316 "assigned_rate_limits": { 00:20:22.316 "rw_ios_per_sec": 0, 00:20:22.316 "rw_mbytes_per_sec": 0, 00:20:22.316 "r_mbytes_per_sec": 0, 00:20:22.316 "w_mbytes_per_sec": 0 00:20:22.316 }, 00:20:22.316 "claimed": false, 00:20:22.316 "zoned": false, 00:20:22.316 "supported_io_types": { 00:20:22.316 "read": true, 00:20:22.316 "write": true, 00:20:22.316 "unmap": false, 
00:20:22.316 "flush": false, 00:20:22.316 "reset": true, 00:20:22.316 "nvme_admin": false, 00:20:22.316 "nvme_io": false, 00:20:22.316 "nvme_io_md": false, 00:20:22.316 "write_zeroes": true, 00:20:22.316 "zcopy": false, 00:20:22.316 "get_zone_info": false, 00:20:22.316 "zone_management": false, 00:20:22.316 "zone_append": false, 00:20:22.316 "compare": false, 00:20:22.316 "compare_and_write": false, 00:20:22.316 "abort": false, 00:20:22.316 "seek_hole": false, 00:20:22.316 "seek_data": false, 00:20:22.316 "copy": false, 00:20:22.316 "nvme_iov_md": false 00:20:22.316 }, 00:20:22.316 "memory_domains": [ 00:20:22.316 { 00:20:22.316 "dma_device_id": "system", 00:20:22.316 "dma_device_type": 1 00:20:22.316 }, 00:20:22.316 { 00:20:22.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:22.316 "dma_device_type": 2 00:20:22.316 }, 00:20:22.316 { 00:20:22.316 "dma_device_id": "system", 00:20:22.316 "dma_device_type": 1 00:20:22.316 }, 00:20:22.316 { 00:20:22.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:22.316 "dma_device_type": 2 00:20:22.316 } 00:20:22.316 ], 00:20:22.316 "driver_specific": { 00:20:22.316 "raid": { 00:20:22.316 "uuid": "90be4a77-1721-4055-a56e-1b2cdf0ba64d", 00:20:22.316 "strip_size_kb": 0, 00:20:22.316 "state": "online", 00:20:22.316 "raid_level": "raid1", 00:20:22.316 "superblock": true, 00:20:22.316 "num_base_bdevs": 2, 00:20:22.316 "num_base_bdevs_discovered": 2, 00:20:22.316 "num_base_bdevs_operational": 2, 00:20:22.316 "base_bdevs_list": [ 00:20:22.316 { 00:20:22.316 "name": "pt1", 00:20:22.316 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:22.316 "is_configured": true, 00:20:22.316 "data_offset": 256, 00:20:22.316 "data_size": 7936 00:20:22.316 }, 00:20:22.316 { 00:20:22.316 "name": "pt2", 00:20:22.316 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:22.316 "is_configured": true, 00:20:22.316 "data_offset": 256, 00:20:22.316 "data_size": 7936 00:20:22.316 } 00:20:22.316 ] 00:20:22.316 } 00:20:22.316 } 00:20:22.316 }' 00:20:22.316 
15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:22.316 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:22.316 pt2' 00:20:22.316 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:22.316 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:22.317 [2024-12-06 15:47:05.549633] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 90be4a77-1721-4055-a56e-1b2cdf0ba64d '!=' 90be4a77-1721-4055-a56e-1b2cdf0ba64d ']' 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:22.317 [2024-12-06 15:47:05.593361] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.317 15:47:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:22.574 15:47:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.574 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:22.574 "name": "raid_bdev1", 00:20:22.574 "uuid": 
"90be4a77-1721-4055-a56e-1b2cdf0ba64d", 00:20:22.574 "strip_size_kb": 0, 00:20:22.574 "state": "online", 00:20:22.574 "raid_level": "raid1", 00:20:22.574 "superblock": true, 00:20:22.574 "num_base_bdevs": 2, 00:20:22.574 "num_base_bdevs_discovered": 1, 00:20:22.574 "num_base_bdevs_operational": 1, 00:20:22.574 "base_bdevs_list": [ 00:20:22.574 { 00:20:22.574 "name": null, 00:20:22.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.574 "is_configured": false, 00:20:22.574 "data_offset": 0, 00:20:22.574 "data_size": 7936 00:20:22.574 }, 00:20:22.574 { 00:20:22.574 "name": "pt2", 00:20:22.574 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:22.574 "is_configured": true, 00:20:22.574 "data_offset": 256, 00:20:22.574 "data_size": 7936 00:20:22.574 } 00:20:22.574 ] 00:20:22.574 }' 00:20:22.574 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:22.574 15:47:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:22.831 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:22.831 15:47:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.831 15:47:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:22.831 [2024-12-06 15:47:05.960808] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:22.831 [2024-12-06 15:47:05.960838] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:22.831 [2024-12-06 15:47:05.960911] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:22.831 [2024-12-06 15:47:05.960961] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:22.831 [2024-12-06 15:47:05.960976] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state 
offline 00:20:22.831 15:47:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.831 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:22.831 15:47:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.831 15:47:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.831 15:47:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:22.831 15:47:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.831 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:22.832 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:22.832 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:22.832 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:22.832 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:22.832 15:47:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.832 15:47:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:22.832 15:47:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.832 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:22.832 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:22.832 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:22.832 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:22.832 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:20:22.832 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:22.832 15:47:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.832 15:47:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:22.832 [2024-12-06 15:47:06.032699] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:22.832 [2024-12-06 15:47:06.032758] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:22.832 [2024-12-06 15:47:06.032795] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:22.832 [2024-12-06 15:47:06.032810] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:22.832 [2024-12-06 15:47:06.035651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:22.832 [2024-12-06 15:47:06.035693] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:22.832 [2024-12-06 15:47:06.035780] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:22.832 [2024-12-06 15:47:06.035834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:22.832 [2024-12-06 15:47:06.035946] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:22.832 [2024-12-06 15:47:06.035961] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:22.832 [2024-12-06 15:47:06.036223] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:22.832 [2024-12-06 15:47:06.036388] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:22.832 [2024-12-06 15:47:06.036398] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:20:22.832 [2024-12-06 15:47:06.036566] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:22.832 pt2 00:20:22.832 15:47:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.832 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:22.832 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:22.832 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:22.832 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:22.832 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:22.832 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:22.832 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:22.832 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:22.832 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:22.832 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:22.832 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.832 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.832 15:47:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.832 15:47:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:22.832 15:47:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.832 15:47:06 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:22.832 "name": "raid_bdev1", 00:20:22.832 "uuid": "90be4a77-1721-4055-a56e-1b2cdf0ba64d", 00:20:22.832 "strip_size_kb": 0, 00:20:22.832 "state": "online", 00:20:22.832 "raid_level": "raid1", 00:20:22.832 "superblock": true, 00:20:22.832 "num_base_bdevs": 2, 00:20:22.832 "num_base_bdevs_discovered": 1, 00:20:22.832 "num_base_bdevs_operational": 1, 00:20:22.832 "base_bdevs_list": [ 00:20:22.832 { 00:20:22.832 "name": null, 00:20:22.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.832 "is_configured": false, 00:20:22.832 "data_offset": 256, 00:20:22.832 "data_size": 7936 00:20:22.832 }, 00:20:22.832 { 00:20:22.832 "name": "pt2", 00:20:22.832 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:22.832 "is_configured": true, 00:20:22.832 "data_offset": 256, 00:20:22.832 "data_size": 7936 00:20:22.832 } 00:20:22.832 ] 00:20:22.832 }' 00:20:22.832 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:22.832 15:47:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:23.398 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:23.398 15:47:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.398 15:47:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:23.398 [2024-12-06 15:47:06.424161] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:23.398 [2024-12-06 15:47:06.424198] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:23.398 [2024-12-06 15:47:06.424273] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:23.398 [2024-12-06 15:47:06.424330] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:20:23.398 [2024-12-06 15:47:06.424341] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:23.398 15:47:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.398 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.398 15:47:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.398 15:47:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:23.398 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:23.398 15:47:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.398 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:23.398 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:20:23.398 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:20:23.398 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:23.398 15:47:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.398 15:47:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:23.398 [2024-12-06 15:47:06.480103] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:23.398 [2024-12-06 15:47:06.480167] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:23.398 [2024-12-06 15:47:06.480192] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:20:23.398 [2024-12-06 15:47:06.480204] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:23.398 [2024-12-06 15:47:06.482997] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:23.398 [2024-12-06 15:47:06.483039] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:23.398 [2024-12-06 15:47:06.483130] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:23.398 [2024-12-06 15:47:06.483180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:23.398 [2024-12-06 15:47:06.483341] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:23.398 [2024-12-06 15:47:06.483354] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:23.398 [2024-12-06 15:47:06.483372] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:23.398 [2024-12-06 15:47:06.483433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:23.398 [2024-12-06 15:47:06.483527] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:20:23.398 [2024-12-06 15:47:06.483538] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:23.398 [2024-12-06 15:47:06.483818] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:23.398 [2024-12-06 15:47:06.483963] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:23.398 [2024-12-06 15:47:06.483978] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:23.398 [2024-12-06 15:47:06.484165] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:23.398 pt1 00:20:23.398 15:47:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.398 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:20:23.398 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:23.398 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:23.398 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:23.398 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:23.398 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:23.398 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:23.398 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:23.398 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:23.398 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:23.398 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:23.398 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.398 15:47:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.398 15:47:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:23.398 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.398 15:47:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.398 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:23.398 "name": "raid_bdev1", 00:20:23.398 "uuid": "90be4a77-1721-4055-a56e-1b2cdf0ba64d", 00:20:23.398 "strip_size_kb": 0, 00:20:23.398 "state": "online", 00:20:23.398 
"raid_level": "raid1", 00:20:23.398 "superblock": true, 00:20:23.398 "num_base_bdevs": 2, 00:20:23.398 "num_base_bdevs_discovered": 1, 00:20:23.398 "num_base_bdevs_operational": 1, 00:20:23.398 "base_bdevs_list": [ 00:20:23.398 { 00:20:23.398 "name": null, 00:20:23.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.398 "is_configured": false, 00:20:23.398 "data_offset": 256, 00:20:23.398 "data_size": 7936 00:20:23.398 }, 00:20:23.398 { 00:20:23.398 "name": "pt2", 00:20:23.398 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:23.398 "is_configured": true, 00:20:23.398 "data_offset": 256, 00:20:23.398 "data_size": 7936 00:20:23.398 } 00:20:23.398 ] 00:20:23.398 }' 00:20:23.398 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:23.398 15:47:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:23.656 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:23.656 15:47:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.656 15:47:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:23.656 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:23.656 15:47:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.656 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:23.656 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:23.656 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:23.656 15:47:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.656 15:47:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # 
set +x 00:20:23.656 [2024-12-06 15:47:06.911723] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:23.656 15:47:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.656 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 90be4a77-1721-4055-a56e-1b2cdf0ba64d '!=' 90be4a77-1721-4055-a56e-1b2cdf0ba64d ']' 00:20:23.656 15:47:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86180 00:20:23.656 15:47:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86180 ']' 00:20:23.656 15:47:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86180 00:20:23.656 15:47:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:20:23.915 15:47:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:23.915 15:47:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86180 00:20:23.915 killing process with pid 86180 00:20:23.915 15:47:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:23.915 15:47:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:23.915 15:47:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86180' 00:20:23.915 15:47:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86180 00:20:23.915 [2024-12-06 15:47:06.990959] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:23.915 [2024-12-06 15:47:06.991052] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:23.915 15:47:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86180 00:20:23.915 [2024-12-06 15:47:06.991103] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:23.915 [2024-12-06 15:47:06.991122] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:23.915 [2024-12-06 15:47:07.207461] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:25.293 15:47:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:20:25.293 00:20:25.293 real 0m5.873s 00:20:25.293 user 0m8.582s 00:20:25.293 sys 0m1.332s 00:20:25.293 15:47:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:25.293 15:47:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:25.293 ************************************ 00:20:25.293 END TEST raid_superblock_test_4k 00:20:25.293 ************************************ 00:20:25.293 15:47:08 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:20:25.293 15:47:08 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:20:25.293 15:47:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:25.293 15:47:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:25.293 15:47:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:25.293 ************************************ 00:20:25.293 START TEST raid_rebuild_test_sb_4k 00:20:25.293 ************************************ 00:20:25.293 15:47:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:20:25.293 15:47:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:25.293 15:47:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:20:25.293 15:47:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:25.293 
15:47:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:25.293 15:47:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:25.293 15:47:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:25.293 15:47:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:25.293 15:47:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:25.293 15:47:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:25.293 15:47:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:25.293 15:47:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:25.293 15:47:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:25.293 15:47:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:25.293 15:47:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:25.293 15:47:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:25.293 15:47:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:25.293 15:47:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:25.293 15:47:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:25.293 15:47:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:25.293 15:47:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:25.293 15:47:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:25.294 15:47:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # 
strip_size=0 00:20:25.294 15:47:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:25.294 15:47:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:25.294 15:47:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86503 00:20:25.294 15:47:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86503 00:20:25.294 15:47:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:25.294 15:47:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86503 ']' 00:20:25.294 15:47:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:25.294 15:47:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:25.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:25.294 15:47:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:25.294 15:47:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:25.294 15:47:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:25.552 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:25.552 Zero copy mechanism will not be used. 00:20:25.552 [2024-12-06 15:47:08.596305] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:20:25.552 [2024-12-06 15:47:08.596468] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86503 ] 00:20:25.552 [2024-12-06 15:47:08.780997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.811 [2024-12-06 15:47:08.921566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.069 [2024-12-06 15:47:09.157011] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:26.069 [2024-12-06 15:47:09.157085] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:26.328 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:26.328 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:20:26.329 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:26.329 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:20:26.329 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.329 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:26.329 BaseBdev1_malloc 00:20:26.329 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.329 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:26.329 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.329 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:26.329 [2024-12-06 15:47:09.472901] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:26.329 [2024-12-06 15:47:09.472974] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:26.329 [2024-12-06 15:47:09.473002] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:26.329 [2024-12-06 15:47:09.473018] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:26.329 [2024-12-06 15:47:09.475722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:26.329 [2024-12-06 15:47:09.475763] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:26.329 BaseBdev1 00:20:26.329 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.329 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:26.329 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:20:26.329 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.329 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:26.329 BaseBdev2_malloc 00:20:26.329 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.329 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:26.329 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.329 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:26.329 [2024-12-06 15:47:09.536782] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:26.329 [2024-12-06 15:47:09.536846] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:20:26.329 [2024-12-06 15:47:09.536890] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:26.329 [2024-12-06 15:47:09.536906] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:26.329 [2024-12-06 15:47:09.539613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:26.329 [2024-12-06 15:47:09.539653] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:26.329 BaseBdev2 00:20:26.329 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.329 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:20:26.329 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.329 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:26.329 spare_malloc 00:20:26.329 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.329 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:26.329 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.329 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:26.589 spare_delay 00:20:26.589 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.589 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:26.589 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.589 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:26.589 
[2024-12-06 15:47:09.636357] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:26.589 [2024-12-06 15:47:09.636420] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:26.589 [2024-12-06 15:47:09.636442] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:26.589 [2024-12-06 15:47:09.636457] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:26.589 [2024-12-06 15:47:09.639172] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:26.589 [2024-12-06 15:47:09.639219] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:26.589 spare 00:20:26.589 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.589 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:26.589 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.589 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:26.589 [2024-12-06 15:47:09.648416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:26.589 [2024-12-06 15:47:09.650784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:26.589 [2024-12-06 15:47:09.650996] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:26.589 [2024-12-06 15:47:09.651012] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:26.589 [2024-12-06 15:47:09.651283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:26.589 [2024-12-06 15:47:09.651472] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:26.589 [2024-12-06 
15:47:09.651492] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:26.589 [2024-12-06 15:47:09.651660] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:26.589 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.589 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:26.589 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:26.589 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:26.589 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:26.589 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:26.589 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:26.589 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:26.589 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:26.589 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:26.589 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:26.589 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.589 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.589 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.589 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:26.589 15:47:09 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.589 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:26.589 "name": "raid_bdev1", 00:20:26.589 "uuid": "7ece081f-8f42-4b71-bc72-a41f97a2246d", 00:20:26.589 "strip_size_kb": 0, 00:20:26.589 "state": "online", 00:20:26.589 "raid_level": "raid1", 00:20:26.589 "superblock": true, 00:20:26.589 "num_base_bdevs": 2, 00:20:26.589 "num_base_bdevs_discovered": 2, 00:20:26.589 "num_base_bdevs_operational": 2, 00:20:26.589 "base_bdevs_list": [ 00:20:26.589 { 00:20:26.589 "name": "BaseBdev1", 00:20:26.589 "uuid": "2634f80f-470f-5f8d-90ef-de6caaae80eb", 00:20:26.589 "is_configured": true, 00:20:26.589 "data_offset": 256, 00:20:26.589 "data_size": 7936 00:20:26.589 }, 00:20:26.589 { 00:20:26.589 "name": "BaseBdev2", 00:20:26.589 "uuid": "205b1d7a-69ec-5861-b4d0-4b5406e0a29b", 00:20:26.589 "is_configured": true, 00:20:26.589 "data_offset": 256, 00:20:26.589 "data_size": 7936 00:20:26.589 } 00:20:26.589 ] 00:20:26.589 }' 00:20:26.589 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:26.589 15:47:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:26.849 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:26.849 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:26.849 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.849 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:26.849 [2024-12-06 15:47:10.028150] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:26.849 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.849 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:20:26.849 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.849 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:26.849 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.849 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:26.849 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.849 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:20:26.849 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:26.849 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:26.849 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:26.849 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:26.849 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:26.849 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:26.849 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:26.849 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:26.849 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:26.849 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:20:26.849 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:26.849 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:26.849 
15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:27.108 [2024-12-06 15:47:10.295649] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:27.108 /dev/nbd0 00:20:27.108 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:27.108 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:27.108 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:27.108 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:20:27.108 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:27.108 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:27.109 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:27.109 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:20:27.109 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:27.109 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:27.109 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:27.109 1+0 records in 00:20:27.109 1+0 records out 00:20:27.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372154 s, 11.0 MB/s 00:20:27.109 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:27.109 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:20:27.109 15:47:10 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:27.109 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:27.109 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:20:27.109 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:27.109 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:27.109 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:20:27.109 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:20:27.109 15:47:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:20:28.045 7936+0 records in 00:20:28.045 7936+0 records out 00:20:28.045 32505856 bytes (33 MB, 31 MiB) copied, 0.689879 s, 47.1 MB/s 00:20:28.045 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:28.045 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:28.045 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:28.045 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:28.045 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:20:28.045 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:28.045 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:28.045 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:28.045 
[2024-12-06 15:47:11.289617] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:28.045 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:28.045 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:28.045 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:28.046 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:28.046 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:28.046 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:20:28.046 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:20:28.046 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:28.046 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.046 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:28.046 [2024-12-06 15:47:11.309710] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:28.046 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.046 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:28.046 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:28.046 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:28.046 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:28.046 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:28.046 15:47:11 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:28.046 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:28.046 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:28.046 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:28.046 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:28.046 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.046 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.046 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:28.046 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.305 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.305 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:28.305 "name": "raid_bdev1", 00:20:28.305 "uuid": "7ece081f-8f42-4b71-bc72-a41f97a2246d", 00:20:28.305 "strip_size_kb": 0, 00:20:28.305 "state": "online", 00:20:28.305 "raid_level": "raid1", 00:20:28.305 "superblock": true, 00:20:28.305 "num_base_bdevs": 2, 00:20:28.305 "num_base_bdevs_discovered": 1, 00:20:28.305 "num_base_bdevs_operational": 1, 00:20:28.305 "base_bdevs_list": [ 00:20:28.305 { 00:20:28.305 "name": null, 00:20:28.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.306 "is_configured": false, 00:20:28.306 "data_offset": 0, 00:20:28.306 "data_size": 7936 00:20:28.306 }, 00:20:28.306 { 00:20:28.306 "name": "BaseBdev2", 00:20:28.306 "uuid": "205b1d7a-69ec-5861-b4d0-4b5406e0a29b", 00:20:28.306 "is_configured": true, 00:20:28.306 "data_offset": 256, 00:20:28.306 
"data_size": 7936 00:20:28.306 } 00:20:28.306 ] 00:20:28.306 }' 00:20:28.306 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:28.306 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:28.565 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:28.565 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.565 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:28.565 [2024-12-06 15:47:11.741213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:28.565 [2024-12-06 15:47:11.758763] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:20:28.565 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.565 15:47:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:28.565 [2024-12-06 15:47:11.761203] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:29.503 15:47:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:29.503 15:47:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:29.503 15:47:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:29.503 15:47:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:29.503 15:47:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:29.503 15:47:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.503 15:47:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:29.503 15:47:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.503 15:47:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:29.762 15:47:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.762 15:47:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:29.762 "name": "raid_bdev1", 00:20:29.762 "uuid": "7ece081f-8f42-4b71-bc72-a41f97a2246d", 00:20:29.762 "strip_size_kb": 0, 00:20:29.762 "state": "online", 00:20:29.762 "raid_level": "raid1", 00:20:29.762 "superblock": true, 00:20:29.762 "num_base_bdevs": 2, 00:20:29.762 "num_base_bdevs_discovered": 2, 00:20:29.762 "num_base_bdevs_operational": 2, 00:20:29.762 "process": { 00:20:29.762 "type": "rebuild", 00:20:29.762 "target": "spare", 00:20:29.762 "progress": { 00:20:29.762 "blocks": 2560, 00:20:29.762 "percent": 32 00:20:29.762 } 00:20:29.762 }, 00:20:29.762 "base_bdevs_list": [ 00:20:29.762 { 00:20:29.762 "name": "spare", 00:20:29.762 "uuid": "1854d65c-66b1-5baa-8fa1-0cbf3a5c64b5", 00:20:29.762 "is_configured": true, 00:20:29.762 "data_offset": 256, 00:20:29.762 "data_size": 7936 00:20:29.762 }, 00:20:29.762 { 00:20:29.762 "name": "BaseBdev2", 00:20:29.762 "uuid": "205b1d7a-69ec-5861-b4d0-4b5406e0a29b", 00:20:29.762 "is_configured": true, 00:20:29.762 "data_offset": 256, 00:20:29.762 "data_size": 7936 00:20:29.762 } 00:20:29.762 ] 00:20:29.762 }' 00:20:29.762 15:47:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:29.762 15:47:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:29.762 15:47:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:29.762 15:47:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:29.762 
15:47:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:29.762 15:47:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.762 15:47:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:29.762 [2024-12-06 15:47:12.912597] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:29.762 [2024-12-06 15:47:12.970037] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:29.762 [2024-12-06 15:47:12.970116] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:29.762 [2024-12-06 15:47:12.970135] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:29.762 [2024-12-06 15:47:12.970148] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:29.762 15:47:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.762 15:47:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:29.762 15:47:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:29.762 15:47:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:29.762 15:47:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:29.762 15:47:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:29.762 15:47:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:29.762 15:47:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:29.762 15:47:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:29.763 15:47:13 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:29.763 15:47:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:29.763 15:47:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.763 15:47:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.763 15:47:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.763 15:47:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:29.763 15:47:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.021 15:47:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:30.021 "name": "raid_bdev1", 00:20:30.021 "uuid": "7ece081f-8f42-4b71-bc72-a41f97a2246d", 00:20:30.021 "strip_size_kb": 0, 00:20:30.021 "state": "online", 00:20:30.021 "raid_level": "raid1", 00:20:30.021 "superblock": true, 00:20:30.021 "num_base_bdevs": 2, 00:20:30.021 "num_base_bdevs_discovered": 1, 00:20:30.021 "num_base_bdevs_operational": 1, 00:20:30.021 "base_bdevs_list": [ 00:20:30.021 { 00:20:30.021 "name": null, 00:20:30.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.021 "is_configured": false, 00:20:30.021 "data_offset": 0, 00:20:30.021 "data_size": 7936 00:20:30.021 }, 00:20:30.021 { 00:20:30.021 "name": "BaseBdev2", 00:20:30.021 "uuid": "205b1d7a-69ec-5861-b4d0-4b5406e0a29b", 00:20:30.021 "is_configured": true, 00:20:30.021 "data_offset": 256, 00:20:30.021 "data_size": 7936 00:20:30.021 } 00:20:30.021 ] 00:20:30.021 }' 00:20:30.021 15:47:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:30.021 15:47:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:30.281 15:47:13 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:30.281 15:47:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:30.281 15:47:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:30.281 15:47:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:30.281 15:47:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:30.281 15:47:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.281 15:47:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.281 15:47:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:30.281 15:47:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.281 15:47:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.281 15:47:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:30.281 "name": "raid_bdev1", 00:20:30.281 "uuid": "7ece081f-8f42-4b71-bc72-a41f97a2246d", 00:20:30.281 "strip_size_kb": 0, 00:20:30.281 "state": "online", 00:20:30.281 "raid_level": "raid1", 00:20:30.281 "superblock": true, 00:20:30.281 "num_base_bdevs": 2, 00:20:30.281 "num_base_bdevs_discovered": 1, 00:20:30.281 "num_base_bdevs_operational": 1, 00:20:30.281 "base_bdevs_list": [ 00:20:30.281 { 00:20:30.281 "name": null, 00:20:30.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.281 "is_configured": false, 00:20:30.281 "data_offset": 0, 00:20:30.281 "data_size": 7936 00:20:30.281 }, 00:20:30.281 { 00:20:30.281 "name": "BaseBdev2", 00:20:30.281 "uuid": "205b1d7a-69ec-5861-b4d0-4b5406e0a29b", 00:20:30.281 "is_configured": true, 00:20:30.281 "data_offset": 256, 00:20:30.281 "data_size": 7936 
00:20:30.281 } 00:20:30.281 ] 00:20:30.281 }' 00:20:30.281 15:47:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:30.281 15:47:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:30.281 15:47:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:30.281 15:47:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:30.281 15:47:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:30.281 15:47:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.281 15:47:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:30.281 [2024-12-06 15:47:13.568986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:30.541 [2024-12-06 15:47:13.587120] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:20:30.541 15:47:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.541 15:47:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:30.541 [2024-12-06 15:47:13.589583] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:31.479 15:47:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:31.479 15:47:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:31.479 15:47:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:31.479 15:47:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:31.479 15:47:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:20:31.479 15:47:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.479 15:47:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.479 15:47:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.479 15:47:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:31.479 15:47:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.479 15:47:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:31.479 "name": "raid_bdev1", 00:20:31.479 "uuid": "7ece081f-8f42-4b71-bc72-a41f97a2246d", 00:20:31.479 "strip_size_kb": 0, 00:20:31.479 "state": "online", 00:20:31.479 "raid_level": "raid1", 00:20:31.479 "superblock": true, 00:20:31.479 "num_base_bdevs": 2, 00:20:31.479 "num_base_bdevs_discovered": 2, 00:20:31.479 "num_base_bdevs_operational": 2, 00:20:31.479 "process": { 00:20:31.479 "type": "rebuild", 00:20:31.479 "target": "spare", 00:20:31.479 "progress": { 00:20:31.479 "blocks": 2560, 00:20:31.479 "percent": 32 00:20:31.479 } 00:20:31.479 }, 00:20:31.479 "base_bdevs_list": [ 00:20:31.479 { 00:20:31.479 "name": "spare", 00:20:31.479 "uuid": "1854d65c-66b1-5baa-8fa1-0cbf3a5c64b5", 00:20:31.479 "is_configured": true, 00:20:31.479 "data_offset": 256, 00:20:31.479 "data_size": 7936 00:20:31.479 }, 00:20:31.479 { 00:20:31.479 "name": "BaseBdev2", 00:20:31.479 "uuid": "205b1d7a-69ec-5861-b4d0-4b5406e0a29b", 00:20:31.479 "is_configured": true, 00:20:31.479 "data_offset": 256, 00:20:31.479 "data_size": 7936 00:20:31.479 } 00:20:31.479 ] 00:20:31.479 }' 00:20:31.479 15:47:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:31.479 15:47:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:20:31.479 15:47:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:31.479 15:47:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:31.479 15:47:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:31.479 15:47:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:31.479 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:31.479 15:47:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:20:31.479 15:47:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:31.479 15:47:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:20:31.479 15:47:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=682 00:20:31.479 15:47:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:31.479 15:47:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:31.479 15:47:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:31.479 15:47:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:31.479 15:47:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:31.479 15:47:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:31.479 15:47:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.479 15:47:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.479 15:47:14 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.479 15:47:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:31.479 15:47:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.739 15:47:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:31.739 "name": "raid_bdev1", 00:20:31.739 "uuid": "7ece081f-8f42-4b71-bc72-a41f97a2246d", 00:20:31.739 "strip_size_kb": 0, 00:20:31.739 "state": "online", 00:20:31.740 "raid_level": "raid1", 00:20:31.740 "superblock": true, 00:20:31.740 "num_base_bdevs": 2, 00:20:31.740 "num_base_bdevs_discovered": 2, 00:20:31.740 "num_base_bdevs_operational": 2, 00:20:31.740 "process": { 00:20:31.740 "type": "rebuild", 00:20:31.740 "target": "spare", 00:20:31.740 "progress": { 00:20:31.740 "blocks": 2816, 00:20:31.740 "percent": 35 00:20:31.740 } 00:20:31.740 }, 00:20:31.740 "base_bdevs_list": [ 00:20:31.740 { 00:20:31.740 "name": "spare", 00:20:31.740 "uuid": "1854d65c-66b1-5baa-8fa1-0cbf3a5c64b5", 00:20:31.740 "is_configured": true, 00:20:31.740 "data_offset": 256, 00:20:31.740 "data_size": 7936 00:20:31.740 }, 00:20:31.740 { 00:20:31.740 "name": "BaseBdev2", 00:20:31.740 "uuid": "205b1d7a-69ec-5861-b4d0-4b5406e0a29b", 00:20:31.740 "is_configured": true, 00:20:31.740 "data_offset": 256, 00:20:31.740 "data_size": 7936 00:20:31.740 } 00:20:31.740 ] 00:20:31.740 }' 00:20:31.740 15:47:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:31.740 15:47:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:31.740 15:47:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:31.740 15:47:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:31.740 15:47:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:20:32.696 15:47:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:32.696 15:47:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:32.696 15:47:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:32.696 15:47:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:32.696 15:47:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:32.696 15:47:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:32.696 15:47:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.696 15:47:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.696 15:47:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.696 15:47:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:32.696 15:47:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.696 15:47:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:32.696 "name": "raid_bdev1", 00:20:32.696 "uuid": "7ece081f-8f42-4b71-bc72-a41f97a2246d", 00:20:32.696 "strip_size_kb": 0, 00:20:32.696 "state": "online", 00:20:32.696 "raid_level": "raid1", 00:20:32.696 "superblock": true, 00:20:32.696 "num_base_bdevs": 2, 00:20:32.696 "num_base_bdevs_discovered": 2, 00:20:32.696 "num_base_bdevs_operational": 2, 00:20:32.696 "process": { 00:20:32.696 "type": "rebuild", 00:20:32.696 "target": "spare", 00:20:32.696 "progress": { 00:20:32.697 "blocks": 5632, 00:20:32.697 "percent": 70 00:20:32.697 } 00:20:32.697 }, 00:20:32.697 "base_bdevs_list": [ 00:20:32.697 { 00:20:32.697 "name": "spare", 
00:20:32.697 "uuid": "1854d65c-66b1-5baa-8fa1-0cbf3a5c64b5", 00:20:32.697 "is_configured": true, 00:20:32.697 "data_offset": 256, 00:20:32.697 "data_size": 7936 00:20:32.697 }, 00:20:32.697 { 00:20:32.697 "name": "BaseBdev2", 00:20:32.697 "uuid": "205b1d7a-69ec-5861-b4d0-4b5406e0a29b", 00:20:32.697 "is_configured": true, 00:20:32.697 "data_offset": 256, 00:20:32.697 "data_size": 7936 00:20:32.697 } 00:20:32.697 ] 00:20:32.697 }' 00:20:32.697 15:47:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:32.697 15:47:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:32.697 15:47:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:32.697 15:47:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:32.697 15:47:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:33.633 [2024-12-06 15:47:16.711971] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:33.633 [2024-12-06 15:47:16.712075] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:33.633 [2024-12-06 15:47:16.712196] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:33.893 15:47:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:33.893 15:47:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:33.893 15:47:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:33.893 15:47:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:33.893 15:47:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:33.893 15:47:16 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:33.893 15:47:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.893 15:47:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.893 15:47:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.893 15:47:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:33.893 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.893 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:33.893 "name": "raid_bdev1", 00:20:33.893 "uuid": "7ece081f-8f42-4b71-bc72-a41f97a2246d", 00:20:33.893 "strip_size_kb": 0, 00:20:33.893 "state": "online", 00:20:33.893 "raid_level": "raid1", 00:20:33.893 "superblock": true, 00:20:33.893 "num_base_bdevs": 2, 00:20:33.893 "num_base_bdevs_discovered": 2, 00:20:33.893 "num_base_bdevs_operational": 2, 00:20:33.893 "base_bdevs_list": [ 00:20:33.893 { 00:20:33.893 "name": "spare", 00:20:33.893 "uuid": "1854d65c-66b1-5baa-8fa1-0cbf3a5c64b5", 00:20:33.893 "is_configured": true, 00:20:33.893 "data_offset": 256, 00:20:33.893 "data_size": 7936 00:20:33.893 }, 00:20:33.893 { 00:20:33.893 "name": "BaseBdev2", 00:20:33.893 "uuid": "205b1d7a-69ec-5861-b4d0-4b5406e0a29b", 00:20:33.893 "is_configured": true, 00:20:33.893 "data_offset": 256, 00:20:33.893 "data_size": 7936 00:20:33.893 } 00:20:33.893 ] 00:20:33.893 }' 00:20:33.893 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:33.893 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:33.893 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:33.893 15:47:17 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:33.893 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:20:33.893 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:33.893 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:33.893 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:33.893 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:33.893 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:33.893 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.893 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.893 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.893 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:33.893 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.893 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:33.893 "name": "raid_bdev1", 00:20:33.893 "uuid": "7ece081f-8f42-4b71-bc72-a41f97a2246d", 00:20:33.893 "strip_size_kb": 0, 00:20:33.893 "state": "online", 00:20:33.893 "raid_level": "raid1", 00:20:33.893 "superblock": true, 00:20:33.893 "num_base_bdevs": 2, 00:20:33.893 "num_base_bdevs_discovered": 2, 00:20:33.893 "num_base_bdevs_operational": 2, 00:20:33.893 "base_bdevs_list": [ 00:20:33.893 { 00:20:33.893 "name": "spare", 00:20:33.893 "uuid": "1854d65c-66b1-5baa-8fa1-0cbf3a5c64b5", 00:20:33.893 "is_configured": true, 00:20:33.893 "data_offset": 256, 00:20:33.893 
"data_size": 7936 00:20:33.893 }, 00:20:33.893 { 00:20:33.893 "name": "BaseBdev2", 00:20:33.893 "uuid": "205b1d7a-69ec-5861-b4d0-4b5406e0a29b", 00:20:33.893 "is_configured": true, 00:20:33.893 "data_offset": 256, 00:20:33.893 "data_size": 7936 00:20:33.893 } 00:20:33.893 ] 00:20:33.893 }' 00:20:33.893 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:34.153 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:34.153 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:34.153 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:34.154 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:34.154 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:34.154 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:34.154 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:34.154 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:34.154 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:34.154 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:34.154 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:34.154 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:34.154 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:34.154 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:20:34.154 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.154 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:34.154 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.154 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.154 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:34.154 "name": "raid_bdev1", 00:20:34.154 "uuid": "7ece081f-8f42-4b71-bc72-a41f97a2246d", 00:20:34.154 "strip_size_kb": 0, 00:20:34.154 "state": "online", 00:20:34.154 "raid_level": "raid1", 00:20:34.154 "superblock": true, 00:20:34.154 "num_base_bdevs": 2, 00:20:34.154 "num_base_bdevs_discovered": 2, 00:20:34.154 "num_base_bdevs_operational": 2, 00:20:34.154 "base_bdevs_list": [ 00:20:34.154 { 00:20:34.154 "name": "spare", 00:20:34.154 "uuid": "1854d65c-66b1-5baa-8fa1-0cbf3a5c64b5", 00:20:34.154 "is_configured": true, 00:20:34.154 "data_offset": 256, 00:20:34.154 "data_size": 7936 00:20:34.154 }, 00:20:34.154 { 00:20:34.154 "name": "BaseBdev2", 00:20:34.154 "uuid": "205b1d7a-69ec-5861-b4d0-4b5406e0a29b", 00:20:34.154 "is_configured": true, 00:20:34.154 "data_offset": 256, 00:20:34.154 "data_size": 7936 00:20:34.154 } 00:20:34.154 ] 00:20:34.154 }' 00:20:34.154 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:34.154 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:34.414 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:34.414 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.414 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:34.414 [2024-12-06 15:47:17.633380] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:34.414 [2024-12-06 15:47:17.633425] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:34.414 [2024-12-06 15:47:17.633541] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:34.414 [2024-12-06 15:47:17.633627] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:34.414 [2024-12-06 15:47:17.633643] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:34.414 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.414 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.414 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.414 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:34.414 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:20:34.414 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.414 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:34.414 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:34.414 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:34.414 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:34.414 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:34.414 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:34.414 
15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:34.414 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:34.414 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:34.414 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:20:34.414 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:34.414 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:34.414 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:34.673 /dev/nbd0 00:20:34.673 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:34.673 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:34.673 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:34.673 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:20:34.673 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:34.673 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:34.674 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:34.674 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:20:34.674 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:34.674 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:34.674 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:34.674 1+0 records in 00:20:34.674 1+0 records out 00:20:34.674 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336939 s, 12.2 MB/s 00:20:34.674 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:34.674 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:20:34.674 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:34.674 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:34.674 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:20:34.674 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:34.674 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:34.674 15:47:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:34.934 /dev/nbd1 00:20:34.934 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:34.934 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:34.934 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:34.934 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:20:34.934 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:34.934 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:34.934 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 
00:20:34.934 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:20:34.934 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:34.934 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:34.934 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:34.934 1+0 records in 00:20:34.934 1+0 records out 00:20:34.934 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337388 s, 12.1 MB/s 00:20:34.934 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:34.934 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:20:34.934 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:34.934 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:34.934 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:20:34.934 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:34.934 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:34.934 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:35.194 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:35.194 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:35.194 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:35.194 15:47:18 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:35.194 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:20:35.194 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:35.194 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:35.453 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:35.453 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:35.453 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:35.453 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:35.453 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:35.453 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:35.453 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:20:35.453 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:20:35.453 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:35.453 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:35.713 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:35.713 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:35.713 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:35.713 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:35.713 
15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:35.713 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:35.713 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:20:35.713 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:20:35.713 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:35.713 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:35.713 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.713 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:35.713 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.713 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:35.713 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.713 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:35.713 [2024-12-06 15:47:18.872531] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:35.713 [2024-12-06 15:47:18.872750] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:35.713 [2024-12-06 15:47:18.872797] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:35.713 [2024-12-06 15:47:18.872811] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:35.713 [2024-12-06 15:47:18.875672] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:35.713 [2024-12-06 15:47:18.875711] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: spare 00:20:35.713 [2024-12-06 15:47:18.875823] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:35.713 [2024-12-06 15:47:18.875885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:35.713 [2024-12-06 15:47:18.876070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:35.713 spare 00:20:35.713 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.713 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:35.713 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.713 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:35.713 [2024-12-06 15:47:18.976010] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:35.713 [2024-12-06 15:47:18.976040] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:35.714 [2024-12-06 15:47:18.976345] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:20:35.714 [2024-12-06 15:47:18.976572] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:35.714 [2024-12-06 15:47:18.976585] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:35.714 [2024-12-06 15:47:18.976790] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:35.714 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.714 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:35.714 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:35.714 
15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:35.714 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:35.714 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:35.714 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:35.714 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:35.714 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:35.714 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:35.714 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:35.714 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.714 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.714 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.714 15:47:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:35.973 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.973 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:35.973 "name": "raid_bdev1", 00:20:35.973 "uuid": "7ece081f-8f42-4b71-bc72-a41f97a2246d", 00:20:35.973 "strip_size_kb": 0, 00:20:35.973 "state": "online", 00:20:35.973 "raid_level": "raid1", 00:20:35.973 "superblock": true, 00:20:35.973 "num_base_bdevs": 2, 00:20:35.973 "num_base_bdevs_discovered": 2, 00:20:35.973 "num_base_bdevs_operational": 2, 00:20:35.973 "base_bdevs_list": [ 00:20:35.973 { 00:20:35.973 "name": "spare", 00:20:35.973 "uuid": 
"1854d65c-66b1-5baa-8fa1-0cbf3a5c64b5", 00:20:35.973 "is_configured": true, 00:20:35.973 "data_offset": 256, 00:20:35.973 "data_size": 7936 00:20:35.973 }, 00:20:35.973 { 00:20:35.973 "name": "BaseBdev2", 00:20:35.973 "uuid": "205b1d7a-69ec-5861-b4d0-4b5406e0a29b", 00:20:35.973 "is_configured": true, 00:20:35.973 "data_offset": 256, 00:20:35.973 "data_size": 7936 00:20:35.973 } 00:20:35.973 ] 00:20:35.973 }' 00:20:35.973 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:35.973 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:36.232 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:36.232 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:36.232 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:36.232 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:36.232 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:36.232 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.232 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.232 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:36.232 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.232 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.232 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:36.232 "name": "raid_bdev1", 00:20:36.232 "uuid": "7ece081f-8f42-4b71-bc72-a41f97a2246d", 00:20:36.232 "strip_size_kb": 0, 00:20:36.232 
"state": "online", 00:20:36.232 "raid_level": "raid1", 00:20:36.232 "superblock": true, 00:20:36.232 "num_base_bdevs": 2, 00:20:36.232 "num_base_bdevs_discovered": 2, 00:20:36.232 "num_base_bdevs_operational": 2, 00:20:36.232 "base_bdevs_list": [ 00:20:36.232 { 00:20:36.232 "name": "spare", 00:20:36.232 "uuid": "1854d65c-66b1-5baa-8fa1-0cbf3a5c64b5", 00:20:36.232 "is_configured": true, 00:20:36.232 "data_offset": 256, 00:20:36.232 "data_size": 7936 00:20:36.232 }, 00:20:36.232 { 00:20:36.232 "name": "BaseBdev2", 00:20:36.232 "uuid": "205b1d7a-69ec-5861-b4d0-4b5406e0a29b", 00:20:36.232 "is_configured": true, 00:20:36.232 "data_offset": 256, 00:20:36.232 "data_size": 7936 00:20:36.232 } 00:20:36.232 ] 00:20:36.232 }' 00:20:36.232 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:36.232 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:36.232 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:36.232 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:36.232 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:36.232 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.232 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.232 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:36.232 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.491 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:36.491 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:36.491 15:47:19 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.491 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:36.491 [2024-12-06 15:47:19.531986] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:36.491 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.491 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:36.491 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:36.491 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:36.491 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:36.491 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:36.491 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:36.491 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:36.491 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:36.491 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:36.491 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:36.491 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.491 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.491 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.491 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:36.491 
15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.491 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:36.491 "name": "raid_bdev1", 00:20:36.491 "uuid": "7ece081f-8f42-4b71-bc72-a41f97a2246d", 00:20:36.491 "strip_size_kb": 0, 00:20:36.491 "state": "online", 00:20:36.491 "raid_level": "raid1", 00:20:36.491 "superblock": true, 00:20:36.491 "num_base_bdevs": 2, 00:20:36.491 "num_base_bdevs_discovered": 1, 00:20:36.491 "num_base_bdevs_operational": 1, 00:20:36.491 "base_bdevs_list": [ 00:20:36.491 { 00:20:36.491 "name": null, 00:20:36.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.491 "is_configured": false, 00:20:36.491 "data_offset": 0, 00:20:36.491 "data_size": 7936 00:20:36.491 }, 00:20:36.491 { 00:20:36.491 "name": "BaseBdev2", 00:20:36.491 "uuid": "205b1d7a-69ec-5861-b4d0-4b5406e0a29b", 00:20:36.491 "is_configured": true, 00:20:36.491 "data_offset": 256, 00:20:36.491 "data_size": 7936 00:20:36.491 } 00:20:36.491 ] 00:20:36.491 }' 00:20:36.491 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:36.491 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:36.749 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:36.749 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.749 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:36.749 [2024-12-06 15:47:19.931506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:36.749 [2024-12-06 15:47:19.931754] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:36.749 [2024-12-06 15:47:19.931774] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: 
Re-adding bdev spare to raid bdev raid_bdev1. 00:20:36.749 [2024-12-06 15:47:19.931818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:36.749 [2024-12-06 15:47:19.950003] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:20:36.749 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.749 15:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:36.749 [2024-12-06 15:47:19.952617] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:37.685 15:47:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:37.685 15:47:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:37.685 15:47:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:37.685 15:47:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:37.685 15:47:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:37.685 15:47:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.685 15:47:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.685 15:47:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.685 15:47:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:37.945 15:47:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.945 15:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:37.945 "name": "raid_bdev1", 00:20:37.945 "uuid": "7ece081f-8f42-4b71-bc72-a41f97a2246d", 00:20:37.945 
"strip_size_kb": 0, 00:20:37.945 "state": "online", 00:20:37.945 "raid_level": "raid1", 00:20:37.945 "superblock": true, 00:20:37.945 "num_base_bdevs": 2, 00:20:37.945 "num_base_bdevs_discovered": 2, 00:20:37.945 "num_base_bdevs_operational": 2, 00:20:37.945 "process": { 00:20:37.945 "type": "rebuild", 00:20:37.945 "target": "spare", 00:20:37.945 "progress": { 00:20:37.945 "blocks": 2560, 00:20:37.945 "percent": 32 00:20:37.945 } 00:20:37.945 }, 00:20:37.945 "base_bdevs_list": [ 00:20:37.945 { 00:20:37.945 "name": "spare", 00:20:37.945 "uuid": "1854d65c-66b1-5baa-8fa1-0cbf3a5c64b5", 00:20:37.945 "is_configured": true, 00:20:37.945 "data_offset": 256, 00:20:37.945 "data_size": 7936 00:20:37.945 }, 00:20:37.945 { 00:20:37.945 "name": "BaseBdev2", 00:20:37.945 "uuid": "205b1d7a-69ec-5861-b4d0-4b5406e0a29b", 00:20:37.945 "is_configured": true, 00:20:37.945 "data_offset": 256, 00:20:37.945 "data_size": 7936 00:20:37.945 } 00:20:37.945 ] 00:20:37.945 }' 00:20:37.945 15:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:37.945 15:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:37.945 15:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:37.945 15:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:37.945 15:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:37.945 15:47:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.945 15:47:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:37.945 [2024-12-06 15:47:21.087996] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:37.945 [2024-12-06 15:47:21.161423] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev 
raid_bdev1: No such device 00:20:37.945 [2024-12-06 15:47:21.161493] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:37.945 [2024-12-06 15:47:21.161523] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:37.945 [2024-12-06 15:47:21.161552] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:37.945 15:47:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.945 15:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:37.945 15:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:37.945 15:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:37.945 15:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:37.945 15:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:37.945 15:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:37.945 15:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:37.945 15:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:37.945 15:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:37.945 15:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:37.945 15:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.945 15:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.945 15:47:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:37.945 15:47:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:37.945 15:47:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.205 15:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:38.205 "name": "raid_bdev1", 00:20:38.205 "uuid": "7ece081f-8f42-4b71-bc72-a41f97a2246d", 00:20:38.205 "strip_size_kb": 0, 00:20:38.205 "state": "online", 00:20:38.205 "raid_level": "raid1", 00:20:38.205 "superblock": true, 00:20:38.205 "num_base_bdevs": 2, 00:20:38.205 "num_base_bdevs_discovered": 1, 00:20:38.205 "num_base_bdevs_operational": 1, 00:20:38.205 "base_bdevs_list": [ 00:20:38.205 { 00:20:38.205 "name": null, 00:20:38.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.205 "is_configured": false, 00:20:38.205 "data_offset": 0, 00:20:38.205 "data_size": 7936 00:20:38.205 }, 00:20:38.205 { 00:20:38.205 "name": "BaseBdev2", 00:20:38.205 "uuid": "205b1d7a-69ec-5861-b4d0-4b5406e0a29b", 00:20:38.205 "is_configured": true, 00:20:38.205 "data_offset": 256, 00:20:38.205 "data_size": 7936 00:20:38.205 } 00:20:38.205 ] 00:20:38.205 }' 00:20:38.205 15:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:38.205 15:47:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:38.465 15:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:38.465 15:47:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.465 15:47:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:38.465 [2024-12-06 15:47:21.578028] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:38.465 [2024-12-06 15:47:21.578233] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:38.465 [2024-12-06 
15:47:21.578269] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:20:38.465 [2024-12-06 15:47:21.578285] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:38.465 [2024-12-06 15:47:21.578875] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:38.465 [2024-12-06 15:47:21.578902] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:38.465 [2024-12-06 15:47:21.579008] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:38.465 [2024-12-06 15:47:21.579026] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:38.465 [2024-12-06 15:47:21.579038] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:38.465 [2024-12-06 15:47:21.579066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:38.465 [2024-12-06 15:47:21.596352] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:20:38.465 spare 00:20:38.465 15:47:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.465 15:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:38.465 [2024-12-06 15:47:21.598790] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:39.403 15:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:39.403 15:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:39.403 15:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:39.403 15:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:20:39.403 15:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:39.404 15:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.404 15:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.404 15:47:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.404 15:47:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:39.404 15:47:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.404 15:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:39.404 "name": "raid_bdev1", 00:20:39.404 "uuid": "7ece081f-8f42-4b71-bc72-a41f97a2246d", 00:20:39.404 "strip_size_kb": 0, 00:20:39.404 "state": "online", 00:20:39.404 "raid_level": "raid1", 00:20:39.404 "superblock": true, 00:20:39.404 "num_base_bdevs": 2, 00:20:39.404 "num_base_bdevs_discovered": 2, 00:20:39.404 "num_base_bdevs_operational": 2, 00:20:39.404 "process": { 00:20:39.404 "type": "rebuild", 00:20:39.404 "target": "spare", 00:20:39.404 "progress": { 00:20:39.404 "blocks": 2560, 00:20:39.404 "percent": 32 00:20:39.404 } 00:20:39.404 }, 00:20:39.404 "base_bdevs_list": [ 00:20:39.404 { 00:20:39.404 "name": "spare", 00:20:39.404 "uuid": "1854d65c-66b1-5baa-8fa1-0cbf3a5c64b5", 00:20:39.404 "is_configured": true, 00:20:39.404 "data_offset": 256, 00:20:39.404 "data_size": 7936 00:20:39.404 }, 00:20:39.404 { 00:20:39.404 "name": "BaseBdev2", 00:20:39.404 "uuid": "205b1d7a-69ec-5861-b4d0-4b5406e0a29b", 00:20:39.404 "is_configured": true, 00:20:39.404 "data_offset": 256, 00:20:39.404 "data_size": 7936 00:20:39.404 } 00:20:39.404 ] 00:20:39.404 }' 00:20:39.404 15:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:39.663 15:47:22 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:39.663 15:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:39.663 15:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:39.663 15:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:39.663 15:47:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.663 15:47:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:39.663 [2024-12-06 15:47:22.746107] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:39.663 [2024-12-06 15:47:22.807595] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:39.663 [2024-12-06 15:47:22.807660] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:39.663 [2024-12-06 15:47:22.807681] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:39.663 [2024-12-06 15:47:22.807690] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:39.663 15:47:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.663 15:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:39.663 15:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:39.663 15:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:39.663 15:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:39.663 15:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:39.663 15:47:22 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:39.663 15:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:39.663 15:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:39.663 15:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:39.663 15:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:39.663 15:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.663 15:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.663 15:47:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.663 15:47:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:39.663 15:47:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.663 15:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:39.663 "name": "raid_bdev1", 00:20:39.663 "uuid": "7ece081f-8f42-4b71-bc72-a41f97a2246d", 00:20:39.663 "strip_size_kb": 0, 00:20:39.663 "state": "online", 00:20:39.663 "raid_level": "raid1", 00:20:39.663 "superblock": true, 00:20:39.663 "num_base_bdevs": 2, 00:20:39.663 "num_base_bdevs_discovered": 1, 00:20:39.663 "num_base_bdevs_operational": 1, 00:20:39.663 "base_bdevs_list": [ 00:20:39.663 { 00:20:39.663 "name": null, 00:20:39.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.663 "is_configured": false, 00:20:39.663 "data_offset": 0, 00:20:39.663 "data_size": 7936 00:20:39.663 }, 00:20:39.663 { 00:20:39.663 "name": "BaseBdev2", 00:20:39.663 "uuid": "205b1d7a-69ec-5861-b4d0-4b5406e0a29b", 00:20:39.663 "is_configured": true, 00:20:39.663 "data_offset": 256, 00:20:39.663 
"data_size": 7936 00:20:39.663 } 00:20:39.663 ] 00:20:39.663 }' 00:20:39.663 15:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:39.663 15:47:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:40.230 15:47:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:40.230 15:47:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:40.230 15:47:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:40.230 15:47:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:40.230 15:47:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:40.230 15:47:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.230 15:47:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.230 15:47:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.230 15:47:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:40.230 15:47:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.230 15:47:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:40.230 "name": "raid_bdev1", 00:20:40.230 "uuid": "7ece081f-8f42-4b71-bc72-a41f97a2246d", 00:20:40.230 "strip_size_kb": 0, 00:20:40.230 "state": "online", 00:20:40.230 "raid_level": "raid1", 00:20:40.230 "superblock": true, 00:20:40.230 "num_base_bdevs": 2, 00:20:40.230 "num_base_bdevs_discovered": 1, 00:20:40.230 "num_base_bdevs_operational": 1, 00:20:40.230 "base_bdevs_list": [ 00:20:40.230 { 00:20:40.230 "name": null, 00:20:40.230 "uuid": "00000000-0000-0000-0000-000000000000", 
00:20:40.230 "is_configured": false, 00:20:40.230 "data_offset": 0, 00:20:40.230 "data_size": 7936 00:20:40.230 }, 00:20:40.230 { 00:20:40.230 "name": "BaseBdev2", 00:20:40.230 "uuid": "205b1d7a-69ec-5861-b4d0-4b5406e0a29b", 00:20:40.230 "is_configured": true, 00:20:40.230 "data_offset": 256, 00:20:40.230 "data_size": 7936 00:20:40.230 } 00:20:40.230 ] 00:20:40.230 }' 00:20:40.230 15:47:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:40.230 15:47:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:40.230 15:47:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:40.230 15:47:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:40.230 15:47:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:40.230 15:47:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.230 15:47:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:40.230 15:47:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.230 15:47:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:40.230 15:47:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.230 15:47:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:40.230 [2024-12-06 15:47:23.372976] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:40.230 [2024-12-06 15:47:23.373044] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:40.230 [2024-12-06 15:47:23.373081] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x61600000b180 00:20:40.230 [2024-12-06 15:47:23.373106] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:40.230 [2024-12-06 15:47:23.373699] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:40.230 [2024-12-06 15:47:23.373736] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:40.230 [2024-12-06 15:47:23.373830] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:40.230 [2024-12-06 15:47:23.373847] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:40.230 [2024-12-06 15:47:23.373863] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:40.230 [2024-12-06 15:47:23.373881] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:40.230 BaseBdev1 00:20:40.230 15:47:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.230 15:47:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:41.165 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:41.165 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:41.165 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:41.165 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:41.165 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:41.166 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:41.166 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:20:41.166 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:41.166 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:41.166 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:41.166 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.166 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.166 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.166 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:41.166 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.166 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:41.166 "name": "raid_bdev1", 00:20:41.166 "uuid": "7ece081f-8f42-4b71-bc72-a41f97a2246d", 00:20:41.166 "strip_size_kb": 0, 00:20:41.166 "state": "online", 00:20:41.166 "raid_level": "raid1", 00:20:41.166 "superblock": true, 00:20:41.166 "num_base_bdevs": 2, 00:20:41.166 "num_base_bdevs_discovered": 1, 00:20:41.166 "num_base_bdevs_operational": 1, 00:20:41.166 "base_bdevs_list": [ 00:20:41.166 { 00:20:41.166 "name": null, 00:20:41.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.166 "is_configured": false, 00:20:41.166 "data_offset": 0, 00:20:41.166 "data_size": 7936 00:20:41.166 }, 00:20:41.166 { 00:20:41.166 "name": "BaseBdev2", 00:20:41.166 "uuid": "205b1d7a-69ec-5861-b4d0-4b5406e0a29b", 00:20:41.166 "is_configured": true, 00:20:41.166 "data_offset": 256, 00:20:41.166 "data_size": 7936 00:20:41.166 } 00:20:41.166 ] 00:20:41.166 }' 00:20:41.166 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:41.166 15:47:24 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:41.733 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:41.733 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:41.733 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:41.733 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:41.733 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:41.733 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.733 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.733 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.733 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:41.733 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.733 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:41.733 "name": "raid_bdev1", 00:20:41.733 "uuid": "7ece081f-8f42-4b71-bc72-a41f97a2246d", 00:20:41.733 "strip_size_kb": 0, 00:20:41.733 "state": "online", 00:20:41.733 "raid_level": "raid1", 00:20:41.733 "superblock": true, 00:20:41.733 "num_base_bdevs": 2, 00:20:41.733 "num_base_bdevs_discovered": 1, 00:20:41.733 "num_base_bdevs_operational": 1, 00:20:41.733 "base_bdevs_list": [ 00:20:41.733 { 00:20:41.733 "name": null, 00:20:41.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.733 "is_configured": false, 00:20:41.733 "data_offset": 0, 00:20:41.733 "data_size": 7936 00:20:41.733 }, 00:20:41.733 { 00:20:41.733 "name": "BaseBdev2", 00:20:41.733 "uuid": 
"205b1d7a-69ec-5861-b4d0-4b5406e0a29b", 00:20:41.733 "is_configured": true, 00:20:41.733 "data_offset": 256, 00:20:41.733 "data_size": 7936 00:20:41.733 } 00:20:41.733 ] 00:20:41.733 }' 00:20:41.733 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:41.733 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:41.733 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:41.733 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:41.733 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:41.733 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:20:41.733 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:41.734 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:41.734 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:41.734 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:41.734 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:41.734 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:41.734 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.734 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:41.734 [2024-12-06 15:47:24.898931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:20:41.734 [2024-12-06 15:47:24.899144] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:41.734 [2024-12-06 15:47:24.899163] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:41.734 request: 00:20:41.734 { 00:20:41.734 "base_bdev": "BaseBdev1", 00:20:41.734 "raid_bdev": "raid_bdev1", 00:20:41.734 "method": "bdev_raid_add_base_bdev", 00:20:41.734 "req_id": 1 00:20:41.734 } 00:20:41.734 Got JSON-RPC error response 00:20:41.734 response: 00:20:41.734 { 00:20:41.734 "code": -22, 00:20:41.734 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:41.734 } 00:20:41.734 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:41.734 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:20:41.734 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:41.734 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:41.734 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:41.734 15:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:42.666 15:47:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:42.667 15:47:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:42.667 15:47:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:42.667 15:47:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:42.667 15:47:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:42.667 15:47:25 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:42.667 15:47:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:42.667 15:47:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:42.667 15:47:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:42.667 15:47:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:42.667 15:47:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.667 15:47:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.667 15:47:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.667 15:47:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:42.667 15:47:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.923 15:47:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:42.923 "name": "raid_bdev1", 00:20:42.923 "uuid": "7ece081f-8f42-4b71-bc72-a41f97a2246d", 00:20:42.923 "strip_size_kb": 0, 00:20:42.923 "state": "online", 00:20:42.923 "raid_level": "raid1", 00:20:42.923 "superblock": true, 00:20:42.923 "num_base_bdevs": 2, 00:20:42.923 "num_base_bdevs_discovered": 1, 00:20:42.923 "num_base_bdevs_operational": 1, 00:20:42.923 "base_bdevs_list": [ 00:20:42.923 { 00:20:42.923 "name": null, 00:20:42.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.923 "is_configured": false, 00:20:42.923 "data_offset": 0, 00:20:42.923 "data_size": 7936 00:20:42.923 }, 00:20:42.923 { 00:20:42.923 "name": "BaseBdev2", 00:20:42.923 "uuid": "205b1d7a-69ec-5861-b4d0-4b5406e0a29b", 00:20:42.923 "is_configured": true, 00:20:42.923 "data_offset": 256, 00:20:42.923 "data_size": 7936 00:20:42.923 } 
00:20:42.923 ] 00:20:42.923 }' 00:20:42.923 15:47:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:42.923 15:47:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:43.181 15:47:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:43.181 15:47:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:43.181 15:47:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:43.181 15:47:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:43.181 15:47:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:43.181 15:47:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.181 15:47:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.181 15:47:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:43.181 15:47:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.181 15:47:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.181 15:47:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:43.181 "name": "raid_bdev1", 00:20:43.181 "uuid": "7ece081f-8f42-4b71-bc72-a41f97a2246d", 00:20:43.181 "strip_size_kb": 0, 00:20:43.181 "state": "online", 00:20:43.181 "raid_level": "raid1", 00:20:43.181 "superblock": true, 00:20:43.181 "num_base_bdevs": 2, 00:20:43.181 "num_base_bdevs_discovered": 1, 00:20:43.181 "num_base_bdevs_operational": 1, 00:20:43.181 "base_bdevs_list": [ 00:20:43.181 { 00:20:43.181 "name": null, 00:20:43.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.181 "is_configured": false, 
00:20:43.181 "data_offset": 0, 00:20:43.181 "data_size": 7936 00:20:43.181 }, 00:20:43.181 { 00:20:43.181 "name": "BaseBdev2", 00:20:43.181 "uuid": "205b1d7a-69ec-5861-b4d0-4b5406e0a29b", 00:20:43.181 "is_configured": true, 00:20:43.181 "data_offset": 256, 00:20:43.181 "data_size": 7936 00:20:43.181 } 00:20:43.181 ] 00:20:43.181 }' 00:20:43.181 15:47:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:43.181 15:47:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:43.181 15:47:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:43.181 15:47:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:43.181 15:47:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86503 00:20:43.181 15:47:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86503 ']' 00:20:43.181 15:47:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86503 00:20:43.181 15:47:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:20:43.181 15:47:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:43.181 15:47:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86503 00:20:43.181 killing process with pid 86503 00:20:43.181 Received shutdown signal, test time was about 60.000000 seconds 00:20:43.181 00:20:43.181 Latency(us) 00:20:43.181 [2024-12-06T15:47:26.476Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.181 [2024-12-06T15:47:26.476Z] =================================================================================================================== 00:20:43.181 [2024-12-06T15:47:26.476Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:43.181 
15:47:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:43.181 15:47:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:43.181 15:47:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86503' 00:20:43.181 15:47:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86503 00:20:43.181 [2024-12-06 15:47:26.461687] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:43.181 15:47:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86503 00:20:43.181 [2024-12-06 15:47:26.461860] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:43.181 [2024-12-06 15:47:26.461923] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:43.181 [2024-12-06 15:47:26.461938] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:43.747 [2024-12-06 15:47:26.781472] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:44.685 15:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:20:44.685 00:20:44.685 real 0m19.483s 00:20:44.685 user 0m24.760s 00:20:44.685 sys 0m2.985s 00:20:44.685 15:47:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:44.685 15:47:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:44.685 ************************************ 00:20:44.685 END TEST raid_rebuild_test_sb_4k 00:20:44.685 ************************************ 00:20:44.945 15:47:28 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:20:44.945 15:47:28 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:20:44.945 
15:47:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:44.945 15:47:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:44.945 15:47:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:44.945 ************************************ 00:20:44.945 START TEST raid_state_function_test_sb_md_separate 00:20:44.945 ************************************ 00:20:44.945 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:20:44.945 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:20:44.945 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:20:44.945 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:44.945 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:44.945 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:44.945 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:44.945 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:44.945 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:44.945 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:44.945 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:44.945 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:44.945 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:20:44.945 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:44.945 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:44.945 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:44.945 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:44.945 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:44.945 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:44.945 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:20:44.945 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:20:44.945 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:44.945 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:44.945 Process raid pid: 87188 00:20:44.945 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87188 00:20:44.945 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:44.945 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87188' 00:20:44.945 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87188 00:20:44.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:44.945 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87188 ']' 00:20:44.945 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.945 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:44.945 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.945 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:44.945 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:44.945 [2024-12-06 15:47:28.159734] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:20:44.945 [2024-12-06 15:47:28.160098] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:45.205 [2024-12-06 15:47:28.337822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.205 [2024-12-06 15:47:28.475099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.464 [2024-12-06 15:47:28.715541] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:45.464 [2024-12-06 15:47:28.715788] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:45.723 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:45.724 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:20:45.724 15:47:28 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:45.724 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.724 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:45.724 [2024-12-06 15:47:28.991931] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:45.724 [2024-12-06 15:47:28.992003] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:45.724 [2024-12-06 15:47:28.992016] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:45.724 [2024-12-06 15:47:28.992030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:45.724 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.724 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:45.724 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:45.724 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:45.724 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:45.724 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:45.724 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:45.724 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:20:45.724 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:45.724 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:45.724 15:47:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:45.724 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.724 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.724 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:45.724 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:45.982 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.982 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:45.982 "name": "Existed_Raid", 00:20:45.982 "uuid": "bbb2fd12-3c1a-47fd-a75f-c9a504619554", 00:20:45.983 "strip_size_kb": 0, 00:20:45.983 "state": "configuring", 00:20:45.983 "raid_level": "raid1", 00:20:45.983 "superblock": true, 00:20:45.983 "num_base_bdevs": 2, 00:20:45.983 "num_base_bdevs_discovered": 0, 00:20:45.983 "num_base_bdevs_operational": 2, 00:20:45.983 "base_bdevs_list": [ 00:20:45.983 { 00:20:45.983 "name": "BaseBdev1", 00:20:45.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.983 "is_configured": false, 00:20:45.983 "data_offset": 0, 00:20:45.983 "data_size": 0 00:20:45.983 }, 00:20:45.983 { 00:20:45.983 "name": "BaseBdev2", 00:20:45.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.983 "is_configured": false, 00:20:45.983 "data_offset": 0, 00:20:45.983 "data_size": 0 00:20:45.983 } 00:20:45.983 ] 
00:20:45.983 }' 00:20:45.983 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:45.983 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:46.243 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:46.243 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.243 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:46.243 [2024-12-06 15:47:29.343439] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:46.243 [2024-12-06 15:47:29.343620] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:46.243 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.243 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:46.243 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.243 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:46.243 [2024-12-06 15:47:29.351423] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:46.243 [2024-12-06 15:47:29.351594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:46.243 [2024-12-06 15:47:29.351617] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:46.243 [2024-12-06 15:47:29.351636] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:46.243 
15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.243 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:20:46.243 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.243 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:46.243 [2024-12-06 15:47:29.404495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:46.243 BaseBdev1 00:20:46.243 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.243 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:46.243 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:46.243 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:46.243 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:20:46.243 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:46.243 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:46.243 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:46.243 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.243 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:46.243 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.243 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:46.243 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.243 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:46.243 [ 00:20:46.243 { 00:20:46.243 "name": "BaseBdev1", 00:20:46.243 "aliases": [ 00:20:46.243 "b11166f8-c18e-4c00-a647-b894970e5fd8" 00:20:46.243 ], 00:20:46.243 "product_name": "Malloc disk", 00:20:46.243 "block_size": 4096, 00:20:46.243 "num_blocks": 8192, 00:20:46.243 "uuid": "b11166f8-c18e-4c00-a647-b894970e5fd8", 00:20:46.243 "md_size": 32, 00:20:46.243 "md_interleave": false, 00:20:46.243 "dif_type": 0, 00:20:46.243 "assigned_rate_limits": { 00:20:46.243 "rw_ios_per_sec": 0, 00:20:46.243 "rw_mbytes_per_sec": 0, 00:20:46.243 "r_mbytes_per_sec": 0, 00:20:46.243 "w_mbytes_per_sec": 0 00:20:46.243 }, 00:20:46.243 "claimed": true, 00:20:46.243 "claim_type": "exclusive_write", 00:20:46.243 "zoned": false, 00:20:46.243 "supported_io_types": { 00:20:46.243 "read": true, 00:20:46.243 "write": true, 00:20:46.243 "unmap": true, 00:20:46.243 "flush": true, 00:20:46.243 "reset": true, 00:20:46.243 "nvme_admin": false, 00:20:46.243 "nvme_io": false, 00:20:46.243 "nvme_io_md": false, 00:20:46.243 "write_zeroes": true, 00:20:46.243 "zcopy": true, 00:20:46.243 "get_zone_info": false, 00:20:46.243 "zone_management": false, 00:20:46.243 "zone_append": false, 00:20:46.243 "compare": false, 00:20:46.243 "compare_and_write": false, 00:20:46.243 "abort": true, 00:20:46.243 "seek_hole": false, 00:20:46.243 "seek_data": false, 00:20:46.243 "copy": true, 00:20:46.243 "nvme_iov_md": false 00:20:46.243 }, 00:20:46.243 "memory_domains": [ 00:20:46.243 { 00:20:46.243 "dma_device_id": "system", 00:20:46.243 "dma_device_type": 1 00:20:46.243 }, 
00:20:46.243 { 00:20:46.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:46.243 "dma_device_type": 2 00:20:46.243 } 00:20:46.243 ], 00:20:46.243 "driver_specific": {} 00:20:46.243 } 00:20:46.243 ] 00:20:46.243 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.244 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:20:46.244 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:46.244 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:46.244 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:46.244 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:46.244 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:46.244 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:46.244 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:46.244 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:46.244 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:46.244 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:46.244 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.244 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:46.244 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:46.244 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:46.244 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.244 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:46.244 "name": "Existed_Raid", 00:20:46.244 "uuid": "8b7af798-fce2-4bb4-8f34-7f7fa5dc2a6f", 00:20:46.244 "strip_size_kb": 0, 00:20:46.244 "state": "configuring", 00:20:46.244 "raid_level": "raid1", 00:20:46.244 "superblock": true, 00:20:46.244 "num_base_bdevs": 2, 00:20:46.244 "num_base_bdevs_discovered": 1, 00:20:46.244 "num_base_bdevs_operational": 2, 00:20:46.244 "base_bdevs_list": [ 00:20:46.244 { 00:20:46.244 "name": "BaseBdev1", 00:20:46.244 "uuid": "b11166f8-c18e-4c00-a647-b894970e5fd8", 00:20:46.244 "is_configured": true, 00:20:46.244 "data_offset": 256, 00:20:46.244 "data_size": 7936 00:20:46.244 }, 00:20:46.244 { 00:20:46.244 "name": "BaseBdev2", 00:20:46.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.244 "is_configured": false, 00:20:46.244 "data_offset": 0, 00:20:46.244 "data_size": 0 00:20:46.244 } 00:20:46.244 ] 00:20:46.244 }' 00:20:46.244 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:46.244 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:46.813 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:46.813 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.813 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 
-- # set +x 00:20:46.813 [2024-12-06 15:47:29.808006] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:46.813 [2024-12-06 15:47:29.808197] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:46.813 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.813 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:46.813 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.813 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:46.813 [2024-12-06 15:47:29.816045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:46.813 [2024-12-06 15:47:29.818634] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:46.813 [2024-12-06 15:47:29.818792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:46.813 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.813 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:46.813 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:46.813 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:46.813 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:46.813 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:20:46.813 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:46.813 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:46.813 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:46.813 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:46.813 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:46.813 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:46.813 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:46.813 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:46.813 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.813 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.813 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:46.813 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.813 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:46.813 "name": "Existed_Raid", 00:20:46.813 "uuid": "8cc1f508-3384-4f5b-9454-683456ef8aae", 00:20:46.813 "strip_size_kb": 0, 00:20:46.813 "state": "configuring", 00:20:46.813 "raid_level": "raid1", 00:20:46.813 "superblock": true, 00:20:46.813 "num_base_bdevs": 2, 00:20:46.813 "num_base_bdevs_discovered": 1, 00:20:46.813 
"num_base_bdevs_operational": 2, 00:20:46.813 "base_bdevs_list": [ 00:20:46.813 { 00:20:46.813 "name": "BaseBdev1", 00:20:46.813 "uuid": "b11166f8-c18e-4c00-a647-b894970e5fd8", 00:20:46.813 "is_configured": true, 00:20:46.813 "data_offset": 256, 00:20:46.813 "data_size": 7936 00:20:46.813 }, 00:20:46.813 { 00:20:46.813 "name": "BaseBdev2", 00:20:46.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.813 "is_configured": false, 00:20:46.813 "data_offset": 0, 00:20:46.813 "data_size": 0 00:20:46.813 } 00:20:46.813 ] 00:20:46.813 }' 00:20:46.813 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:46.813 15:47:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:47.073 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:20:47.073 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.073 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:47.073 [2024-12-06 15:47:30.220281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:47.073 [2024-12-06 15:47:30.220594] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:47.073 [2024-12-06 15:47:30.220618] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:47.073 [2024-12-06 15:47:30.220724] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:47.073 [2024-12-06 15:47:30.220873] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:47.073 [2024-12-06 15:47:30.220889] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:47.073 [2024-12-06 
15:47:30.220986] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:47.073 BaseBdev2 00:20:47.073 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.073 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:47.073 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:47.073 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:47.073 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:20:47.073 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:47.073 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:47.073 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:47.073 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.073 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:47.073 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.073 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:47.073 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.073 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:47.073 [ 00:20:47.073 { 00:20:47.073 "name": "BaseBdev2", 00:20:47.073 "aliases": [ 00:20:47.073 
"4ff76b18-18e2-4029-98cc-a5f3c57dd6a0" 00:20:47.073 ], 00:20:47.073 "product_name": "Malloc disk", 00:20:47.073 "block_size": 4096, 00:20:47.073 "num_blocks": 8192, 00:20:47.073 "uuid": "4ff76b18-18e2-4029-98cc-a5f3c57dd6a0", 00:20:47.073 "md_size": 32, 00:20:47.073 "md_interleave": false, 00:20:47.073 "dif_type": 0, 00:20:47.073 "assigned_rate_limits": { 00:20:47.073 "rw_ios_per_sec": 0, 00:20:47.073 "rw_mbytes_per_sec": 0, 00:20:47.073 "r_mbytes_per_sec": 0, 00:20:47.073 "w_mbytes_per_sec": 0 00:20:47.073 }, 00:20:47.073 "claimed": true, 00:20:47.073 "claim_type": "exclusive_write", 00:20:47.073 "zoned": false, 00:20:47.073 "supported_io_types": { 00:20:47.073 "read": true, 00:20:47.073 "write": true, 00:20:47.073 "unmap": true, 00:20:47.073 "flush": true, 00:20:47.073 "reset": true, 00:20:47.073 "nvme_admin": false, 00:20:47.073 "nvme_io": false, 00:20:47.073 "nvme_io_md": false, 00:20:47.073 "write_zeroes": true, 00:20:47.073 "zcopy": true, 00:20:47.073 "get_zone_info": false, 00:20:47.073 "zone_management": false, 00:20:47.073 "zone_append": false, 00:20:47.073 "compare": false, 00:20:47.073 "compare_and_write": false, 00:20:47.073 "abort": true, 00:20:47.073 "seek_hole": false, 00:20:47.073 "seek_data": false, 00:20:47.073 "copy": true, 00:20:47.073 "nvme_iov_md": false 00:20:47.073 }, 00:20:47.073 "memory_domains": [ 00:20:47.073 { 00:20:47.073 "dma_device_id": "system", 00:20:47.073 "dma_device_type": 1 00:20:47.073 }, 00:20:47.073 { 00:20:47.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:47.074 "dma_device_type": 2 00:20:47.074 } 00:20:47.074 ], 00:20:47.074 "driver_specific": {} 00:20:47.074 } 00:20:47.074 ] 00:20:47.074 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.074 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:20:47.074 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( 
i++ )) 00:20:47.074 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:47.074 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:47.074 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:47.074 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:47.074 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:47.074 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:47.074 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:47.074 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:47.074 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:47.074 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:47.074 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:47.074 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.074 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:47.074 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.074 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:47.074 15:47:30 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.074 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:47.074 "name": "Existed_Raid", 00:20:47.074 "uuid": "8cc1f508-3384-4f5b-9454-683456ef8aae", 00:20:47.074 "strip_size_kb": 0, 00:20:47.074 "state": "online", 00:20:47.074 "raid_level": "raid1", 00:20:47.074 "superblock": true, 00:20:47.074 "num_base_bdevs": 2, 00:20:47.074 "num_base_bdevs_discovered": 2, 00:20:47.074 "num_base_bdevs_operational": 2, 00:20:47.074 "base_bdevs_list": [ 00:20:47.074 { 00:20:47.074 "name": "BaseBdev1", 00:20:47.074 "uuid": "b11166f8-c18e-4c00-a647-b894970e5fd8", 00:20:47.074 "is_configured": true, 00:20:47.074 "data_offset": 256, 00:20:47.074 "data_size": 7936 00:20:47.074 }, 00:20:47.074 { 00:20:47.074 "name": "BaseBdev2", 00:20:47.074 "uuid": "4ff76b18-18e2-4029-98cc-a5f3c57dd6a0", 00:20:47.074 "is_configured": true, 00:20:47.074 "data_offset": 256, 00:20:47.074 "data_size": 7936 00:20:47.074 } 00:20:47.074 ] 00:20:47.074 }' 00:20:47.074 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:47.074 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:47.648 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:47.648 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:47.648 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:47.648 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:47.648 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:20:47.648 15:47:30 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:47.648 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:47.648 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:47.648 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.648 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:47.649 [2024-12-06 15:47:30.688121] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:47.649 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.649 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:47.649 "name": "Existed_Raid", 00:20:47.649 "aliases": [ 00:20:47.649 "8cc1f508-3384-4f5b-9454-683456ef8aae" 00:20:47.649 ], 00:20:47.649 "product_name": "Raid Volume", 00:20:47.649 "block_size": 4096, 00:20:47.649 "num_blocks": 7936, 00:20:47.649 "uuid": "8cc1f508-3384-4f5b-9454-683456ef8aae", 00:20:47.649 "md_size": 32, 00:20:47.649 "md_interleave": false, 00:20:47.649 "dif_type": 0, 00:20:47.649 "assigned_rate_limits": { 00:20:47.649 "rw_ios_per_sec": 0, 00:20:47.649 "rw_mbytes_per_sec": 0, 00:20:47.649 "r_mbytes_per_sec": 0, 00:20:47.649 "w_mbytes_per_sec": 0 00:20:47.649 }, 00:20:47.649 "claimed": false, 00:20:47.649 "zoned": false, 00:20:47.649 "supported_io_types": { 00:20:47.649 "read": true, 00:20:47.649 "write": true, 00:20:47.649 "unmap": false, 00:20:47.649 "flush": false, 00:20:47.649 "reset": true, 00:20:47.649 "nvme_admin": false, 00:20:47.649 "nvme_io": false, 00:20:47.649 "nvme_io_md": false, 00:20:47.649 "write_zeroes": true, 00:20:47.649 "zcopy": false, 00:20:47.649 "get_zone_info": 
false, 00:20:47.649 "zone_management": false, 00:20:47.649 "zone_append": false, 00:20:47.649 "compare": false, 00:20:47.649 "compare_and_write": false, 00:20:47.649 "abort": false, 00:20:47.649 "seek_hole": false, 00:20:47.649 "seek_data": false, 00:20:47.649 "copy": false, 00:20:47.649 "nvme_iov_md": false 00:20:47.649 }, 00:20:47.649 "memory_domains": [ 00:20:47.649 { 00:20:47.649 "dma_device_id": "system", 00:20:47.649 "dma_device_type": 1 00:20:47.649 }, 00:20:47.649 { 00:20:47.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:47.649 "dma_device_type": 2 00:20:47.649 }, 00:20:47.649 { 00:20:47.649 "dma_device_id": "system", 00:20:47.649 "dma_device_type": 1 00:20:47.649 }, 00:20:47.649 { 00:20:47.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:47.649 "dma_device_type": 2 00:20:47.649 } 00:20:47.649 ], 00:20:47.649 "driver_specific": { 00:20:47.649 "raid": { 00:20:47.649 "uuid": "8cc1f508-3384-4f5b-9454-683456ef8aae", 00:20:47.649 "strip_size_kb": 0, 00:20:47.649 "state": "online", 00:20:47.649 "raid_level": "raid1", 00:20:47.649 "superblock": true, 00:20:47.649 "num_base_bdevs": 2, 00:20:47.649 "num_base_bdevs_discovered": 2, 00:20:47.649 "num_base_bdevs_operational": 2, 00:20:47.649 "base_bdevs_list": [ 00:20:47.649 { 00:20:47.649 "name": "BaseBdev1", 00:20:47.649 "uuid": "b11166f8-c18e-4c00-a647-b894970e5fd8", 00:20:47.649 "is_configured": true, 00:20:47.649 "data_offset": 256, 00:20:47.649 "data_size": 7936 00:20:47.649 }, 00:20:47.649 { 00:20:47.649 "name": "BaseBdev2", 00:20:47.649 "uuid": "4ff76b18-18e2-4029-98cc-a5f3c57dd6a0", 00:20:47.649 "is_configured": true, 00:20:47.649 "data_offset": 256, 00:20:47.649 "data_size": 7936 00:20:47.649 } 00:20:47.649 ] 00:20:47.649 } 00:20:47.649 } 00:20:47.649 }' 00:20:47.649 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:47.649 15:47:30 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:47.649 BaseBdev2' 00:20:47.649 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:47.649 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:20:47.649 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:47.649 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:47.649 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.649 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:47.649 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:47.649 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.649 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:47.649 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:47.649 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:47.649 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:47.649 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:20:47.649 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.649 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:47.649 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.649 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:47.649 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:47.649 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:47.649 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.649 15:47:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:47.649 [2024-12-06 15:47:30.895517] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:47.907 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.907 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:47.907 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:20:47.907 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:47.907 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:20:47.907 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:47.907 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid 
online raid1 0 1 00:20:47.907 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:47.907 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:47.907 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:47.907 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:47.907 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:47.907 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:47.907 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:47.907 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:47.907 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:47.907 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.907 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:47.907 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.907 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:47.907 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.907 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:47.907 "name": "Existed_Raid", 00:20:47.907 "uuid": 
"8cc1f508-3384-4f5b-9454-683456ef8aae", 00:20:47.907 "strip_size_kb": 0, 00:20:47.907 "state": "online", 00:20:47.907 "raid_level": "raid1", 00:20:47.907 "superblock": true, 00:20:47.907 "num_base_bdevs": 2, 00:20:47.907 "num_base_bdevs_discovered": 1, 00:20:47.907 "num_base_bdevs_operational": 1, 00:20:47.907 "base_bdevs_list": [ 00:20:47.907 { 00:20:47.907 "name": null, 00:20:47.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.907 "is_configured": false, 00:20:47.907 "data_offset": 0, 00:20:47.907 "data_size": 7936 00:20:47.907 }, 00:20:47.907 { 00:20:47.907 "name": "BaseBdev2", 00:20:47.907 "uuid": "4ff76b18-18e2-4029-98cc-a5f3c57dd6a0", 00:20:47.907 "is_configured": true, 00:20:47.908 "data_offset": 256, 00:20:47.908 "data_size": 7936 00:20:47.908 } 00:20:47.908 ] 00:20:47.908 }' 00:20:47.908 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:47.908 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:48.165 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:48.165 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:48.165 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.165 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.165 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:48.165 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:48.165 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.165 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 
-- # raid_bdev=Existed_Raid 00:20:48.424 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:48.424 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:48.424 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.424 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:48.424 [2024-12-06 15:47:31.461136] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:48.424 [2024-12-06 15:47:31.461395] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:48.424 [2024-12-06 15:47:31.571574] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:48.424 [2024-12-06 15:47:31.571639] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:48.424 [2024-12-06 15:47:31.571656] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:48.424 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.424 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:48.424 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:48.424 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.424 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.424 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:48.424 15:47:31 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:48.424 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.424 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:48.424 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:48.424 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:20:48.424 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87188 00:20:48.424 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87188 ']' 00:20:48.424 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87188 00:20:48.424 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:20:48.424 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:48.424 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87188 00:20:48.424 killing process with pid 87188 00:20:48.424 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:48.424 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:48.424 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87188' 00:20:48.424 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87188 00:20:48.424 [2024-12-06 15:47:31.668302] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:20:48.424 15:47:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87188 00:20:48.424 [2024-12-06 15:47:31.686220] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:49.804 15:47:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:20:49.804 00:20:49.804 real 0m4.844s 00:20:49.804 user 0m6.631s 00:20:49.804 sys 0m1.034s 00:20:49.804 ************************************ 00:20:49.804 END TEST raid_state_function_test_sb_md_separate 00:20:49.804 ************************************ 00:20:49.804 15:47:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:49.804 15:47:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:49.804 15:47:32 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:20:49.804 15:47:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:49.804 15:47:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:49.804 15:47:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:49.804 ************************************ 00:20:49.804 START TEST raid_superblock_test_md_separate 00:20:49.804 ************************************ 00:20:49.804 15:47:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:20:49.804 15:47:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:20:49.804 15:47:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:20:49.804 15:47:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:49.804 15:47:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 
00:20:49.804 15:47:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:49.804 15:47:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:20:49.804 15:47:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:49.804 15:47:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:49.804 15:47:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:49.804 15:47:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:49.804 15:47:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:49.804 15:47:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:49.804 15:47:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:49.804 15:47:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:20:49.804 15:47:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:20:49.804 15:47:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87429 00:20:49.804 15:47:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:49.804 15:47:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87429 00:20:49.804 15:47:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87429 ']' 00:20:49.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:49.804 15:47:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.804 15:47:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:49.804 15:47:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.804 15:47:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:49.804 15:47:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:49.804 [2024-12-06 15:47:33.085847] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:20:49.804 [2024-12-06 15:47:33.085989] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87429 ] 00:20:50.064 [2024-12-06 15:47:33.273387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.324 [2024-12-06 15:47:33.405579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.583 [2024-12-06 15:47:33.634931] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:50.583 [2024-12-06 15:47:33.635002] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:50.844 15:47:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:50.844 15:47:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:20:50.844 15:47:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:50.844 15:47:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= 
num_base_bdevs )) 00:20:50.844 15:47:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:50.844 15:47:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:50.844 15:47:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:50.844 15:47:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:50.844 15:47:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:50.844 15:47:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:50.844 15:47:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:20:50.844 15:47:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.844 15:47:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:50.844 malloc1 00:20:50.844 15:47:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.844 15:47:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:50.844 15:47:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.844 15:47:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:50.844 [2024-12-06 15:47:33.989199] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:50.844 [2024-12-06 15:47:33.989424] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:50.844 [2024-12-06 15:47:33.989467] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:50.844 [2024-12-06 15:47:33.989481] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:50.844 [2024-12-06 15:47:33.992042] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:50.844 [2024-12-06 15:47:33.992082] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:50.844 pt1 00:20:50.844 15:47:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.844 15:47:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:50.844 15:47:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:50.844 15:47:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:50.844 15:47:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:20:50.844 15:47:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:50.844 15:47:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:50.844 15:47:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:50.844 15:47:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:50.844 15:47:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:20:50.844 15:47:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.844 15:47:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:50.844 malloc2 00:20:50.844 15:47:34 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.844 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:50.844 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.844 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:50.844 [2024-12-06 15:47:34.056110] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:50.844 [2024-12-06 15:47:34.056170] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:50.844 [2024-12-06 15:47:34.056199] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:50.844 [2024-12-06 15:47:34.056211] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:50.844 [2024-12-06 15:47:34.058685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:50.844 [2024-12-06 15:47:34.058842] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:50.844 pt2 00:20:50.844 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.844 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:50.844 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:50.844 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:20:50.844 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.844 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:50.844 
[2024-12-06 15:47:34.068120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:50.844 [2024-12-06 15:47:34.070509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:50.844 [2024-12-06 15:47:34.070715] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:50.844 [2024-12-06 15:47:34.070731] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:50.844 [2024-12-06 15:47:34.070825] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:50.844 [2024-12-06 15:47:34.070961] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:50.844 [2024-12-06 15:47:34.070975] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:50.844 [2024-12-06 15:47:34.071074] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:50.844 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.844 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:50.844 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:50.844 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:50.844 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:50.844 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:50.844 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:50.844 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:20:50.844 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:50.844 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:50.844 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:50.844 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:50.844 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.844 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:50.844 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.844 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.844 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:50.844 "name": "raid_bdev1", 00:20:50.844 "uuid": "e44ef2b1-57a7-47c8-ad4e-ca63022ef0f2", 00:20:50.844 "strip_size_kb": 0, 00:20:50.844 "state": "online", 00:20:50.844 "raid_level": "raid1", 00:20:50.844 "superblock": true, 00:20:50.844 "num_base_bdevs": 2, 00:20:50.844 "num_base_bdevs_discovered": 2, 00:20:50.844 "num_base_bdevs_operational": 2, 00:20:50.844 "base_bdevs_list": [ 00:20:50.844 { 00:20:50.844 "name": "pt1", 00:20:50.844 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:50.844 "is_configured": true, 00:20:50.844 "data_offset": 256, 00:20:50.844 "data_size": 7936 00:20:50.844 }, 00:20:50.844 { 00:20:50.844 "name": "pt2", 00:20:50.844 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:50.844 "is_configured": true, 00:20:50.844 "data_offset": 256, 00:20:50.844 "data_size": 7936 00:20:50.844 } 00:20:50.845 ] 00:20:50.845 }' 00:20:50.845 15:47:34 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:50.845 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:51.418 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:51.418 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:51.418 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:51.418 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:51.418 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:20:51.418 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:51.418 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:51.418 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.418 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:51.418 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:51.418 [2024-12-06 15:47:34.475954] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:51.418 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.418 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:51.418 "name": "raid_bdev1", 00:20:51.418 "aliases": [ 00:20:51.418 "e44ef2b1-57a7-47c8-ad4e-ca63022ef0f2" 00:20:51.418 ], 00:20:51.418 "product_name": "Raid Volume", 00:20:51.418 "block_size": 4096, 00:20:51.418 "num_blocks": 7936, 00:20:51.418 "uuid": "e44ef2b1-57a7-47c8-ad4e-ca63022ef0f2", 
00:20:51.418 "md_size": 32, 00:20:51.418 "md_interleave": false, 00:20:51.418 "dif_type": 0, 00:20:51.418 "assigned_rate_limits": { 00:20:51.418 "rw_ios_per_sec": 0, 00:20:51.418 "rw_mbytes_per_sec": 0, 00:20:51.418 "r_mbytes_per_sec": 0, 00:20:51.418 "w_mbytes_per_sec": 0 00:20:51.418 }, 00:20:51.418 "claimed": false, 00:20:51.418 "zoned": false, 00:20:51.418 "supported_io_types": { 00:20:51.418 "read": true, 00:20:51.418 "write": true, 00:20:51.418 "unmap": false, 00:20:51.418 "flush": false, 00:20:51.418 "reset": true, 00:20:51.418 "nvme_admin": false, 00:20:51.418 "nvme_io": false, 00:20:51.418 "nvme_io_md": false, 00:20:51.418 "write_zeroes": true, 00:20:51.418 "zcopy": false, 00:20:51.418 "get_zone_info": false, 00:20:51.418 "zone_management": false, 00:20:51.418 "zone_append": false, 00:20:51.418 "compare": false, 00:20:51.418 "compare_and_write": false, 00:20:51.418 "abort": false, 00:20:51.418 "seek_hole": false, 00:20:51.418 "seek_data": false, 00:20:51.418 "copy": false, 00:20:51.418 "nvme_iov_md": false 00:20:51.418 }, 00:20:51.418 "memory_domains": [ 00:20:51.418 { 00:20:51.418 "dma_device_id": "system", 00:20:51.418 "dma_device_type": 1 00:20:51.418 }, 00:20:51.419 { 00:20:51.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:51.419 "dma_device_type": 2 00:20:51.419 }, 00:20:51.419 { 00:20:51.419 "dma_device_id": "system", 00:20:51.419 "dma_device_type": 1 00:20:51.419 }, 00:20:51.419 { 00:20:51.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:51.419 "dma_device_type": 2 00:20:51.419 } 00:20:51.419 ], 00:20:51.419 "driver_specific": { 00:20:51.419 "raid": { 00:20:51.419 "uuid": "e44ef2b1-57a7-47c8-ad4e-ca63022ef0f2", 00:20:51.419 "strip_size_kb": 0, 00:20:51.419 "state": "online", 00:20:51.419 "raid_level": "raid1", 00:20:51.419 "superblock": true, 00:20:51.419 "num_base_bdevs": 2, 00:20:51.419 "num_base_bdevs_discovered": 2, 00:20:51.419 "num_base_bdevs_operational": 2, 00:20:51.419 "base_bdevs_list": [ 00:20:51.419 { 00:20:51.419 "name": "pt1", 
00:20:51.419 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:51.419 "is_configured": true, 00:20:51.419 "data_offset": 256, 00:20:51.419 "data_size": 7936 00:20:51.419 }, 00:20:51.419 { 00:20:51.419 "name": "pt2", 00:20:51.419 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:51.419 "is_configured": true, 00:20:51.419 "data_offset": 256, 00:20:51.419 "data_size": 7936 00:20:51.419 } 00:20:51.419 ] 00:20:51.419 } 00:20:51.419 } 00:20:51.419 }' 00:20:51.419 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:51.419 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:51.419 pt2' 00:20:51.419 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:51.419 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:20:51.419 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:51.419 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:51.419 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:51.419 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.419 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:51.419 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.419 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:51.419 15:47:34 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:51.419 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:51.419 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:51.419 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:51.419 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.419 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:51.419 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.419 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:51.419 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:51.419 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:51.419 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:51.419 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.419 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:51.419 [2024-12-06 15:47:34.703519] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:51.679 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.679 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # 
raid_bdev_uuid=e44ef2b1-57a7-47c8-ad4e-ca63022ef0f2 00:20:51.679 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z e44ef2b1-57a7-47c8-ad4e-ca63022ef0f2 ']' 00:20:51.679 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:51.679 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.679 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:51.679 [2024-12-06 15:47:34.739210] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:51.679 [2024-12-06 15:47:34.739235] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:51.679 [2024-12-06 15:47:34.739323] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:51.679 [2024-12-06 15:47:34.739387] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:51.679 [2024-12-06 15:47:34.739402] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:51.679 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.679 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:51.679 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.679 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.679 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:51.679 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.679 15:47:34 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:51.679 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:51.679 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:51.679 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:51.679 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.679 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:51.679 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.679 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:51.679 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:51.679 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.679 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:51.679 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.679 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:51.679 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:51.679 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.679 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:51.679 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.680 15:47:34 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:51.680 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:51.680 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:20:51.680 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:51.680 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:51.680 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:51.680 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:51.680 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:51.680 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:51.680 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.680 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:51.680 [2024-12-06 15:47:34.871036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:51.680 [2024-12-06 15:47:34.873588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:51.680 [2024-12-06 15:47:34.873672] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:51.680 [2024-12-06 15:47:34.873742] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:51.680 [2024-12-06 15:47:34.873761] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:51.680 [2024-12-06 15:47:34.873774] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:51.680 request: 00:20:51.680 { 00:20:51.680 "name": "raid_bdev1", 00:20:51.680 "raid_level": "raid1", 00:20:51.680 "base_bdevs": [ 00:20:51.680 "malloc1", 00:20:51.680 "malloc2" 00:20:51.680 ], 00:20:51.680 "superblock": false, 00:20:51.680 "method": "bdev_raid_create", 00:20:51.680 "req_id": 1 00:20:51.680 } 00:20:51.680 Got JSON-RPC error response 00:20:51.680 response: 00:20:51.680 { 00:20:51.680 "code": -17, 00:20:51.680 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:51.680 } 00:20:51.680 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:51.680 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:20:51.680 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:51.680 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:51.680 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:51.680 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.680 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.680 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:51.680 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:51.680 15:47:34 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.680 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:51.680 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:51.680 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:51.680 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.680 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:51.680 [2024-12-06 15:47:34.934931] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:51.680 [2024-12-06 15:47:34.934986] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:51.680 [2024-12-06 15:47:34.935005] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:51.680 [2024-12-06 15:47:34.935020] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:51.680 [2024-12-06 15:47:34.937557] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:51.680 [2024-12-06 15:47:34.937596] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:51.680 [2024-12-06 15:47:34.937645] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:51.680 [2024-12-06 15:47:34.937724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:51.680 pt1 00:20:51.680 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.680 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:51.680 15:47:34 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:51.680 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:51.680 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:51.680 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:51.680 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:51.680 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:51.680 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:51.680 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:51.680 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:51.680 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.680 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.680 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.680 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:51.680 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.939 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:51.939 "name": "raid_bdev1", 00:20:51.939 "uuid": "e44ef2b1-57a7-47c8-ad4e-ca63022ef0f2", 00:20:51.939 "strip_size_kb": 0, 00:20:51.939 "state": "configuring", 00:20:51.939 "raid_level": "raid1", 00:20:51.939 
"superblock": true, 00:20:51.939 "num_base_bdevs": 2, 00:20:51.939 "num_base_bdevs_discovered": 1, 00:20:51.939 "num_base_bdevs_operational": 2, 00:20:51.939 "base_bdevs_list": [ 00:20:51.939 { 00:20:51.939 "name": "pt1", 00:20:51.939 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:51.939 "is_configured": true, 00:20:51.939 "data_offset": 256, 00:20:51.939 "data_size": 7936 00:20:51.939 }, 00:20:51.939 { 00:20:51.939 "name": null, 00:20:51.939 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:51.939 "is_configured": false, 00:20:51.939 "data_offset": 256, 00:20:51.939 "data_size": 7936 00:20:51.939 } 00:20:51.939 ] 00:20:51.939 }' 00:20:51.939 15:47:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:51.939 15:47:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:52.197 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:20:52.197 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:52.197 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:52.197 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:52.197 15:47:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.197 15:47:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:52.197 [2024-12-06 15:47:35.298411] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:52.197 [2024-12-06 15:47:35.298483] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:52.197 [2024-12-06 15:47:35.298521] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:52.197 
[2024-12-06 15:47:35.298538] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:52.197 [2024-12-06 15:47:35.298760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:52.197 [2024-12-06 15:47:35.298781] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:52.198 [2024-12-06 15:47:35.298825] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:52.198 [2024-12-06 15:47:35.298850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:52.198 [2024-12-06 15:47:35.298974] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:52.198 [2024-12-06 15:47:35.298988] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:52.198 [2024-12-06 15:47:35.299060] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:52.198 [2024-12-06 15:47:35.299181] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:52.198 [2024-12-06 15:47:35.299191] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:52.198 [2024-12-06 15:47:35.299302] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:52.198 pt2 00:20:52.198 15:47:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.198 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:52.198 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:52.198 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:52.198 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:20:52.198 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:52.198 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:52.198 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:52.198 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:52.198 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:52.198 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:52.198 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:52.198 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:52.198 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:52.198 15:47:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.198 15:47:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:52.198 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.198 15:47:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.198 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:52.198 "name": "raid_bdev1", 00:20:52.198 "uuid": "e44ef2b1-57a7-47c8-ad4e-ca63022ef0f2", 00:20:52.198 "strip_size_kb": 0, 00:20:52.198 "state": "online", 00:20:52.198 "raid_level": "raid1", 00:20:52.198 "superblock": true, 00:20:52.198 "num_base_bdevs": 2, 00:20:52.198 "num_base_bdevs_discovered": 2, 00:20:52.198 
"num_base_bdevs_operational": 2, 00:20:52.198 "base_bdevs_list": [ 00:20:52.198 { 00:20:52.198 "name": "pt1", 00:20:52.198 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:52.198 "is_configured": true, 00:20:52.198 "data_offset": 256, 00:20:52.198 "data_size": 7936 00:20:52.198 }, 00:20:52.198 { 00:20:52.198 "name": "pt2", 00:20:52.198 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:52.198 "is_configured": true, 00:20:52.198 "data_offset": 256, 00:20:52.198 "data_size": 7936 00:20:52.198 } 00:20:52.198 ] 00:20:52.198 }' 00:20:52.198 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:52.198 15:47:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:52.457 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:52.457 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:52.457 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:52.457 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:52.457 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:20:52.457 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:52.457 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:52.457 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:52.457 15:47:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.457 15:47:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:52.457 [2024-12-06 15:47:35.686141] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:52.457 15:47:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.457 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:52.457 "name": "raid_bdev1", 00:20:52.457 "aliases": [ 00:20:52.457 "e44ef2b1-57a7-47c8-ad4e-ca63022ef0f2" 00:20:52.457 ], 00:20:52.457 "product_name": "Raid Volume", 00:20:52.457 "block_size": 4096, 00:20:52.457 "num_blocks": 7936, 00:20:52.457 "uuid": "e44ef2b1-57a7-47c8-ad4e-ca63022ef0f2", 00:20:52.457 "md_size": 32, 00:20:52.457 "md_interleave": false, 00:20:52.457 "dif_type": 0, 00:20:52.457 "assigned_rate_limits": { 00:20:52.457 "rw_ios_per_sec": 0, 00:20:52.457 "rw_mbytes_per_sec": 0, 00:20:52.457 "r_mbytes_per_sec": 0, 00:20:52.457 "w_mbytes_per_sec": 0 00:20:52.457 }, 00:20:52.457 "claimed": false, 00:20:52.457 "zoned": false, 00:20:52.457 "supported_io_types": { 00:20:52.457 "read": true, 00:20:52.457 "write": true, 00:20:52.457 "unmap": false, 00:20:52.457 "flush": false, 00:20:52.457 "reset": true, 00:20:52.457 "nvme_admin": false, 00:20:52.457 "nvme_io": false, 00:20:52.457 "nvme_io_md": false, 00:20:52.457 "write_zeroes": true, 00:20:52.457 "zcopy": false, 00:20:52.457 "get_zone_info": false, 00:20:52.457 "zone_management": false, 00:20:52.457 "zone_append": false, 00:20:52.457 "compare": false, 00:20:52.457 "compare_and_write": false, 00:20:52.457 "abort": false, 00:20:52.457 "seek_hole": false, 00:20:52.457 "seek_data": false, 00:20:52.457 "copy": false, 00:20:52.457 "nvme_iov_md": false 00:20:52.457 }, 00:20:52.457 "memory_domains": [ 00:20:52.457 { 00:20:52.457 "dma_device_id": "system", 00:20:52.457 "dma_device_type": 1 00:20:52.457 }, 00:20:52.457 { 00:20:52.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:52.457 "dma_device_type": 2 00:20:52.457 }, 00:20:52.457 { 00:20:52.457 "dma_device_id": "system", 00:20:52.457 "dma_device_type": 
1 00:20:52.457 }, 00:20:52.457 { 00:20:52.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:52.457 "dma_device_type": 2 00:20:52.457 } 00:20:52.457 ], 00:20:52.457 "driver_specific": { 00:20:52.457 "raid": { 00:20:52.457 "uuid": "e44ef2b1-57a7-47c8-ad4e-ca63022ef0f2", 00:20:52.457 "strip_size_kb": 0, 00:20:52.457 "state": "online", 00:20:52.457 "raid_level": "raid1", 00:20:52.457 "superblock": true, 00:20:52.457 "num_base_bdevs": 2, 00:20:52.457 "num_base_bdevs_discovered": 2, 00:20:52.457 "num_base_bdevs_operational": 2, 00:20:52.457 "base_bdevs_list": [ 00:20:52.457 { 00:20:52.457 "name": "pt1", 00:20:52.457 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:52.457 "is_configured": true, 00:20:52.457 "data_offset": 256, 00:20:52.457 "data_size": 7936 00:20:52.457 }, 00:20:52.457 { 00:20:52.457 "name": "pt2", 00:20:52.457 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:52.457 "is_configured": true, 00:20:52.457 "data_offset": 256, 00:20:52.457 "data_size": 7936 00:20:52.457 } 00:20:52.457 ] 00:20:52.457 } 00:20:52.457 } 00:20:52.457 }' 00:20:52.457 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:52.717 pt2' 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r 
'.[] | .uuid' 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:52.717 [2024-12-06 15:47:35.893980] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' e44ef2b1-57a7-47c8-ad4e-ca63022ef0f2 '!=' e44ef2b1-57a7-47c8-ad4e-ca63022ef0f2 ']' 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:52.717 [2024-12-06 15:47:35.929707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:52.717 15:47:35 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:52.717 "name": "raid_bdev1", 00:20:52.717 "uuid": "e44ef2b1-57a7-47c8-ad4e-ca63022ef0f2", 00:20:52.717 "strip_size_kb": 0, 00:20:52.717 "state": "online", 00:20:52.717 "raid_level": "raid1", 00:20:52.717 "superblock": true, 00:20:52.717 "num_base_bdevs": 2, 00:20:52.717 "num_base_bdevs_discovered": 1, 00:20:52.717 "num_base_bdevs_operational": 1, 00:20:52.717 "base_bdevs_list": [ 00:20:52.717 { 00:20:52.717 "name": null, 00:20:52.717 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:52.717 "is_configured": false, 00:20:52.717 "data_offset": 0, 00:20:52.717 "data_size": 7936 00:20:52.717 }, 00:20:52.717 { 00:20:52.717 "name": "pt2", 00:20:52.717 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:52.717 "is_configured": true, 00:20:52.717 "data_offset": 256, 00:20:52.717 "data_size": 7936 00:20:52.717 } 00:20:52.717 ] 00:20:52.717 }' 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:52.717 15:47:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:53.320 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:53.320 15:47:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.320 15:47:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:53.320 [2024-12-06 15:47:36.369055] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:53.320 [2024-12-06 15:47:36.369191] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:53.320 [2024-12-06 15:47:36.369281] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:53.320 [2024-12-06 15:47:36.369334] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:53.320 [2024-12-06 15:47:36.369349] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:53.320 15:47:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.320 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.320 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r 
'.[]' 00:20:53.320 15:47:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.320 15:47:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:53.320 15:47:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.320 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:53.320 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:53.320 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:53.320 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:53.320 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:53.320 15:47:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.320 15:47:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:53.320 15:47:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.320 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:53.320 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:53.320 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:53.320 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:53.321 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:20:53.321 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:20:53.321 15:47:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.321 15:47:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:53.321 [2024-12-06 15:47:36.440955] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:53.321 [2024-12-06 15:47:36.441013] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:53.321 [2024-12-06 15:47:36.441049] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:53.321 [2024-12-06 15:47:36.441064] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:53.321 [2024-12-06 15:47:36.443671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:53.321 [2024-12-06 15:47:36.443808] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:53.321 [2024-12-06 15:47:36.443889] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:53.321 [2024-12-06 15:47:36.443955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:53.321 [2024-12-06 15:47:36.444068] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:53.321 [2024-12-06 15:47:36.444084] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:53.321 [2024-12-06 15:47:36.444168] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:53.321 [2024-12-06 15:47:36.444298] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:53.321 [2024-12-06 15:47:36.444308] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:53.321 [2024-12-06 15:47:36.444402] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:53.321 pt2 
00:20:53.321 15:47:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.321 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:53.321 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:53.321 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:53.321 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:53.321 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:53.321 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:53.321 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:53.321 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:53.321 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:53.321 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:53.321 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.321 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.321 15:47:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.321 15:47:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:53.321 15:47:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.321 15:47:36 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:53.321 "name": "raid_bdev1", 00:20:53.321 "uuid": "e44ef2b1-57a7-47c8-ad4e-ca63022ef0f2", 00:20:53.321 "strip_size_kb": 0, 00:20:53.321 "state": "online", 00:20:53.321 "raid_level": "raid1", 00:20:53.321 "superblock": true, 00:20:53.321 "num_base_bdevs": 2, 00:20:53.321 "num_base_bdevs_discovered": 1, 00:20:53.321 "num_base_bdevs_operational": 1, 00:20:53.321 "base_bdevs_list": [ 00:20:53.321 { 00:20:53.321 "name": null, 00:20:53.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.321 "is_configured": false, 00:20:53.321 "data_offset": 256, 00:20:53.321 "data_size": 7936 00:20:53.321 }, 00:20:53.321 { 00:20:53.321 "name": "pt2", 00:20:53.321 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:53.321 "is_configured": true, 00:20:53.321 "data_offset": 256, 00:20:53.321 "data_size": 7936 00:20:53.321 } 00:20:53.321 ] 00:20:53.321 }' 00:20:53.321 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:53.321 15:47:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:53.581 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:53.581 15:47:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.581 15:47:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:53.841 [2024-12-06 15:47:36.876341] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:53.841 [2024-12-06 15:47:36.876478] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:53.841 [2024-12-06 15:47:36.876671] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:53.841 [2024-12-06 15:47:36.876809] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid 
bdev base bdevs is 0, going to free all in destruct 00:20:53.841 [2024-12-06 15:47:36.876931] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:53.841 15:47:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.841 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:53.841 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.841 15:47:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.841 15:47:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:53.841 15:47:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.841 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:53.841 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:20:53.841 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:20:53.841 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:53.841 15:47:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.841 15:47:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:53.841 [2024-12-06 15:47:36.932296] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:53.841 [2024-12-06 15:47:36.932457] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:53.841 [2024-12-06 15:47:36.932492] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000009980 00:20:53.841 [2024-12-06 15:47:36.932516] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:53.841 [2024-12-06 15:47:36.935026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:53.841 [2024-12-06 15:47:36.935066] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:53.841 [2024-12-06 15:47:36.935126] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:53.841 [2024-12-06 15:47:36.935182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:53.841 [2024-12-06 15:47:36.935325] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:53.841 [2024-12-06 15:47:36.935337] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:53.841 [2024-12-06 15:47:36.935356] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:53.841 [2024-12-06 15:47:36.935442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:53.841 [2024-12-06 15:47:36.935530] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:20:53.841 [2024-12-06 15:47:36.935540] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:53.842 [2024-12-06 15:47:36.935607] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:53.842 [2024-12-06 15:47:36.935732] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:53.842 [2024-12-06 15:47:36.935744] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:53.842 [2024-12-06 15:47:36.935850] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:53.842 pt1 00:20:53.842 
15:47:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.842 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:20:53.842 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:53.842 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:53.842 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:53.842 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:53.842 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:53.842 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:53.842 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:53.842 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:53.842 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:53.842 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:53.842 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.842 15:47:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.842 15:47:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:53.842 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.842 15:47:36 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.842 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:53.842 "name": "raid_bdev1", 00:20:53.842 "uuid": "e44ef2b1-57a7-47c8-ad4e-ca63022ef0f2", 00:20:53.842 "strip_size_kb": 0, 00:20:53.842 "state": "online", 00:20:53.842 "raid_level": "raid1", 00:20:53.842 "superblock": true, 00:20:53.842 "num_base_bdevs": 2, 00:20:53.842 "num_base_bdevs_discovered": 1, 00:20:53.842 "num_base_bdevs_operational": 1, 00:20:53.842 "base_bdevs_list": [ 00:20:53.842 { 00:20:53.842 "name": null, 00:20:53.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.842 "is_configured": false, 00:20:53.842 "data_offset": 256, 00:20:53.842 "data_size": 7936 00:20:53.842 }, 00:20:53.842 { 00:20:53.842 "name": "pt2", 00:20:53.842 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:53.842 "is_configured": true, 00:20:53.842 "data_offset": 256, 00:20:53.842 "data_size": 7936 00:20:53.842 } 00:20:53.842 ] 00:20:53.842 }' 00:20:53.842 15:47:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:53.842 15:47:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:54.102 15:47:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:54.102 15:47:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:54.102 15:47:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.102 15:47:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:54.102 15:47:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.102 15:47:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:54.102 15:47:37 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:54.102 15:47:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:54.102 15:47:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.102 15:47:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:54.102 [2024-12-06 15:47:37.379868] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:54.361 15:47:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.361 15:47:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' e44ef2b1-57a7-47c8-ad4e-ca63022ef0f2 '!=' e44ef2b1-57a7-47c8-ad4e-ca63022ef0f2 ']' 00:20:54.361 15:47:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87429 00:20:54.361 15:47:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87429 ']' 00:20:54.361 15:47:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87429 00:20:54.361 15:47:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:20:54.361 15:47:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:54.361 15:47:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87429 00:20:54.361 15:47:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:54.361 killing process with pid 87429 00:20:54.361 15:47:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:54.361 15:47:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 87429' 00:20:54.361 15:47:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87429 00:20:54.361 [2024-12-06 15:47:37.462835] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:54.361 [2024-12-06 15:47:37.462916] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:54.362 [2024-12-06 15:47:37.462966] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:54.362 15:47:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87429 00:20:54.362 [2024-12-06 15:47:37.462988] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:54.621 [2024-12-06 15:47:37.699234] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:56.000 15:47:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:20:56.000 00:20:56.000 real 0m5.928s 00:20:56.000 user 0m8.720s 00:20:56.000 sys 0m1.275s 00:20:56.000 ************************************ 00:20:56.000 END TEST raid_superblock_test_md_separate 00:20:56.000 ************************************ 00:20:56.000 15:47:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:56.000 15:47:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:56.000 15:47:38 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:20:56.000 15:47:38 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:20:56.000 15:47:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:56.000 15:47:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:56.000 15:47:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:56.000 ************************************ 
00:20:56.000 START TEST raid_rebuild_test_sb_md_separate 00:20:56.000 ************************************ 00:20:56.000 15:47:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:20:56.000 15:47:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:56.000 15:47:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:20:56.000 15:47:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:56.000 15:47:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:56.001 15:47:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:56.001 15:47:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:56.001 15:47:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:56.001 15:47:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:56.001 15:47:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:56.001 15:47:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:56.001 15:47:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:56.001 15:47:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:56.001 15:47:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:56.001 15:47:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:56.001 15:47:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:56.001 
15:47:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:56.001 15:47:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:56.001 15:47:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:56.001 15:47:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:56.001 15:47:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:56.001 15:47:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:56.001 15:47:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:56.001 15:47:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:56.001 15:47:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:56.001 15:47:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87758 00:20:56.001 15:47:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:56.001 15:47:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87758 00:20:56.001 15:47:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87758 ']' 00:20:56.001 15:47:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.001 15:47:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:56.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:56.001 15:47:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.001 15:47:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:56.001 15:47:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:56.001 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:56.001 Zero copy mechanism will not be used. 00:20:56.001 [2024-12-06 15:47:39.102698] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:20:56.001 [2024-12-06 15:47:39.102856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87758 ] 00:20:56.001 [2024-12-06 15:47:39.285345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.260 [2024-12-06 15:47:39.421919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.519 [2024-12-06 15:47:39.650863] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:56.519 [2024-12-06 15:47:39.650936] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:56.778 15:47:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:56.778 15:47:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:20:56.778 15:47:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:56.778 15:47:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:20:56.778 15:47:39 bdev_raid.raid_rebuild_test_sb_md_separate 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.778 15:47:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:56.778 BaseBdev1_malloc 00:20:56.778 15:47:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.778 15:47:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:56.778 15:47:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.778 15:47:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:56.778 [2024-12-06 15:47:39.994114] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:56.778 [2024-12-06 15:47:39.994327] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:56.778 [2024-12-06 15:47:39.994393] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:56.778 [2024-12-06 15:47:39.994522] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:56.778 [2024-12-06 15:47:39.997042] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:56.778 [2024-12-06 15:47:39.997210] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:56.778 BaseBdev1 00:20:56.778 15:47:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.778 15:47:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:56.778 15:47:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:20:56.778 15:47:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.778 15:47:39 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:56.778 BaseBdev2_malloc 00:20:56.778 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.778 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:56.778 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.778 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:56.778 [2024-12-06 15:47:40.059128] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:56.778 [2024-12-06 15:47:40.059320] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:56.778 [2024-12-06 15:47:40.059352] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:56.778 [2024-12-06 15:47:40.059370] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:56.778 [2024-12-06 15:47:40.061897] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:56.778 [2024-12-06 15:47:40.061939] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:56.778 BaseBdev2 00:20:56.778 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.778 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:20:56.779 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.779 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:57.038 spare_malloc 00:20:57.038 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:20:57.038 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:57.038 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.038 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:57.038 spare_delay 00:20:57.038 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.038 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:57.038 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.038 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:57.038 [2024-12-06 15:47:40.147064] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:57.038 [2024-12-06 15:47:40.147246] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:57.038 [2024-12-06 15:47:40.147304] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:57.038 [2024-12-06 15:47:40.147383] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:57.038 [2024-12-06 15:47:40.150075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:57.038 [2024-12-06 15:47:40.150221] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:57.038 spare 00:20:57.038 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.038 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:57.038 
15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.038 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:57.038 [2024-12-06 15:47:40.159094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:57.038 [2024-12-06 15:47:40.161443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:57.038 [2024-12-06 15:47:40.161651] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:57.038 [2024-12-06 15:47:40.161669] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:57.038 [2024-12-06 15:47:40.161759] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:57.038 [2024-12-06 15:47:40.161908] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:57.038 [2024-12-06 15:47:40.161920] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:57.038 [2024-12-06 15:47:40.162034] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:57.038 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.038 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:57.038 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:57.038 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:57.038 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:57.038 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:57.038 
15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:57.038 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:57.038 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:57.038 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:57.038 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:57.038 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.038 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.038 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.038 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:57.038 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.038 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:57.038 "name": "raid_bdev1", 00:20:57.038 "uuid": "00ec58df-231e-468b-aaa0-d6ea1ab694f1", 00:20:57.038 "strip_size_kb": 0, 00:20:57.038 "state": "online", 00:20:57.038 "raid_level": "raid1", 00:20:57.038 "superblock": true, 00:20:57.038 "num_base_bdevs": 2, 00:20:57.038 "num_base_bdevs_discovered": 2, 00:20:57.038 "num_base_bdevs_operational": 2, 00:20:57.038 "base_bdevs_list": [ 00:20:57.038 { 00:20:57.038 "name": "BaseBdev1", 00:20:57.038 "uuid": "481f1a2b-c8f0-58a4-88e3-5e77a25f0768", 00:20:57.038 "is_configured": true, 00:20:57.038 "data_offset": 256, 00:20:57.038 "data_size": 7936 00:20:57.038 }, 00:20:57.038 { 00:20:57.038 "name": "BaseBdev2", 00:20:57.038 "uuid": 
"6d2bbfa3-584d-5238-8cdd-a7c228449b5c", 00:20:57.038 "is_configured": true, 00:20:57.038 "data_offset": 256, 00:20:57.038 "data_size": 7936 00:20:57.038 } 00:20:57.038 ] 00:20:57.038 }' 00:20:57.038 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:57.038 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:57.607 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:57.607 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:57.607 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.607 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:57.607 [2024-12-06 15:47:40.607002] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:57.607 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.607 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:20:57.607 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:57.607 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.607 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.607 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:57.607 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.607 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:20:57.607 15:47:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:57.607 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:57.607 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:57.607 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:57.607 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:57.607 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:57.607 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:57.607 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:57.607 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:57.607 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:20:57.607 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:57.607 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:57.607 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:57.607 [2024-12-06 15:47:40.878551] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:57.607 /dev/nbd0 00:20:57.866 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:57.866 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:57.866 15:47:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:57.866 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:20:57.866 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:57.866 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:57.866 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:57.866 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:20:57.866 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:57.866 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:57.866 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:57.866 1+0 records in 00:20:57.866 1+0 records out 00:20:57.866 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452515 s, 9.1 MB/s 00:20:57.866 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:57.866 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:20:57.866 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:57.867 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:57.867 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:20:57.867 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:57.867 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:57.867 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:20:57.867 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:20:57.867 15:47:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:20:58.434 7936+0 records in 00:20:58.434 7936+0 records out 00:20:58.434 32505856 bytes (33 MB, 31 MiB) copied, 0.704605 s, 46.1 MB/s 00:20:58.434 15:47:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:58.434 15:47:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:58.434 15:47:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:58.434 15:47:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:58.434 15:47:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:20:58.434 15:47:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:58.434 15:47:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:58.693 15:47:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:58.693 [2024-12-06 15:47:41.880066] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:58.693 15:47:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:58.693 15:47:41 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:58.693 15:47:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:58.693 15:47:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:58.693 15:47:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:58.693 15:47:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:20:58.693 15:47:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:20:58.693 15:47:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:58.693 15:47:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.693 15:47:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:58.693 [2024-12-06 15:47:41.896623] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:58.693 15:47:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.693 15:47:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:58.693 15:47:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:58.693 15:47:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:58.693 15:47:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:58.693 15:47:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:58.693 15:47:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:58.693 15:47:41 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:58.693 15:47:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:58.693 15:47:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:58.693 15:47:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:58.693 15:47:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.693 15:47:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.693 15:47:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.693 15:47:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:58.693 15:47:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.693 15:47:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:58.693 "name": "raid_bdev1", 00:20:58.693 "uuid": "00ec58df-231e-468b-aaa0-d6ea1ab694f1", 00:20:58.693 "strip_size_kb": 0, 00:20:58.693 "state": "online", 00:20:58.693 "raid_level": "raid1", 00:20:58.693 "superblock": true, 00:20:58.693 "num_base_bdevs": 2, 00:20:58.693 "num_base_bdevs_discovered": 1, 00:20:58.693 "num_base_bdevs_operational": 1, 00:20:58.693 "base_bdevs_list": [ 00:20:58.693 { 00:20:58.693 "name": null, 00:20:58.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.693 "is_configured": false, 00:20:58.693 "data_offset": 0, 00:20:58.693 "data_size": 7936 00:20:58.693 }, 00:20:58.693 { 00:20:58.693 "name": "BaseBdev2", 00:20:58.693 "uuid": "6d2bbfa3-584d-5238-8cdd-a7c228449b5c", 00:20:58.693 "is_configured": true, 00:20:58.694 "data_offset": 256, 00:20:58.694 "data_size": 7936 00:20:58.694 } 
00:20:58.694 ] 00:20:58.694 }' 00:20:58.694 15:47:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:58.694 15:47:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:59.262 15:47:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:59.262 15:47:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.262 15:47:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:59.262 [2024-12-06 15:47:42.352005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:59.262 [2024-12-06 15:47:42.368066] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:20:59.262 15:47:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.262 15:47:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:59.262 [2024-12-06 15:47:42.370708] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:00.200 15:47:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:00.200 15:47:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:00.200 15:47:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:00.200 15:47:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:00.200 15:47:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:00.200 15:47:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.200 15:47:43 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.200 15:47:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.200 15:47:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:00.200 15:47:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.200 15:47:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:00.200 "name": "raid_bdev1", 00:21:00.200 "uuid": "00ec58df-231e-468b-aaa0-d6ea1ab694f1", 00:21:00.200 "strip_size_kb": 0, 00:21:00.200 "state": "online", 00:21:00.200 "raid_level": "raid1", 00:21:00.200 "superblock": true, 00:21:00.200 "num_base_bdevs": 2, 00:21:00.200 "num_base_bdevs_discovered": 2, 00:21:00.200 "num_base_bdevs_operational": 2, 00:21:00.200 "process": { 00:21:00.200 "type": "rebuild", 00:21:00.200 "target": "spare", 00:21:00.200 "progress": { 00:21:00.200 "blocks": 2560, 00:21:00.200 "percent": 32 00:21:00.200 } 00:21:00.200 }, 00:21:00.200 "base_bdevs_list": [ 00:21:00.200 { 00:21:00.200 "name": "spare", 00:21:00.200 "uuid": "1730310a-b283-5eaa-9a29-c32867e184b7", 00:21:00.200 "is_configured": true, 00:21:00.200 "data_offset": 256, 00:21:00.200 "data_size": 7936 00:21:00.200 }, 00:21:00.200 { 00:21:00.200 "name": "BaseBdev2", 00:21:00.200 "uuid": "6d2bbfa3-584d-5238-8cdd-a7c228449b5c", 00:21:00.200 "is_configured": true, 00:21:00.200 "data_offset": 256, 00:21:00.200 "data_size": 7936 00:21:00.200 } 00:21:00.200 ] 00:21:00.200 }' 00:21:00.200 15:47:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:00.200 15:47:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:00.200 15:47:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:21:00.460 15:47:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:00.460 15:47:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:00.460 15:47:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.460 15:47:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:00.460 [2024-12-06 15:47:43.502125] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:00.460 [2024-12-06 15:47:43.579686] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:00.460 [2024-12-06 15:47:43.579917] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:00.460 [2024-12-06 15:47:43.579943] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:00.460 [2024-12-06 15:47:43.579961] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:00.460 15:47:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.460 15:47:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:00.460 15:47:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:00.460 15:47:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:00.460 15:47:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:00.460 15:47:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:00.460 15:47:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:21:00.460 15:47:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:00.460 15:47:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:00.460 15:47:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:00.460 15:47:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:00.460 15:47:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.460 15:47:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.460 15:47:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.460 15:47:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:00.460 15:47:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.460 15:47:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:00.460 "name": "raid_bdev1", 00:21:00.460 "uuid": "00ec58df-231e-468b-aaa0-d6ea1ab694f1", 00:21:00.460 "strip_size_kb": 0, 00:21:00.460 "state": "online", 00:21:00.460 "raid_level": "raid1", 00:21:00.460 "superblock": true, 00:21:00.460 "num_base_bdevs": 2, 00:21:00.460 "num_base_bdevs_discovered": 1, 00:21:00.460 "num_base_bdevs_operational": 1, 00:21:00.460 "base_bdevs_list": [ 00:21:00.460 { 00:21:00.460 "name": null, 00:21:00.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.460 "is_configured": false, 00:21:00.461 "data_offset": 0, 00:21:00.461 "data_size": 7936 00:21:00.461 }, 00:21:00.461 { 00:21:00.461 "name": "BaseBdev2", 00:21:00.461 "uuid": "6d2bbfa3-584d-5238-8cdd-a7c228449b5c", 00:21:00.461 "is_configured": true, 00:21:00.461 "data_offset": 
256, 00:21:00.461 "data_size": 7936 00:21:00.461 } 00:21:00.461 ] 00:21:00.461 }' 00:21:00.461 15:47:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:00.461 15:47:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:01.030 15:47:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:01.030 15:47:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:01.030 15:47:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:01.030 15:47:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:01.030 15:47:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:01.030 15:47:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.030 15:47:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.030 15:47:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.030 15:47:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:01.030 15:47:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.030 15:47:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:01.030 "name": "raid_bdev1", 00:21:01.030 "uuid": "00ec58df-231e-468b-aaa0-d6ea1ab694f1", 00:21:01.030 "strip_size_kb": 0, 00:21:01.030 "state": "online", 00:21:01.030 "raid_level": "raid1", 00:21:01.030 "superblock": true, 00:21:01.030 "num_base_bdevs": 2, 00:21:01.030 "num_base_bdevs_discovered": 1, 00:21:01.030 "num_base_bdevs_operational": 1, 
00:21:01.030 "base_bdevs_list": [ 00:21:01.030 { 00:21:01.030 "name": null, 00:21:01.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.030 "is_configured": false, 00:21:01.030 "data_offset": 0, 00:21:01.030 "data_size": 7936 00:21:01.030 }, 00:21:01.030 { 00:21:01.030 "name": "BaseBdev2", 00:21:01.030 "uuid": "6d2bbfa3-584d-5238-8cdd-a7c228449b5c", 00:21:01.030 "is_configured": true, 00:21:01.030 "data_offset": 256, 00:21:01.030 "data_size": 7936 00:21:01.030 } 00:21:01.030 ] 00:21:01.030 }' 00:21:01.030 15:47:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:01.030 15:47:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:01.030 15:47:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:01.030 15:47:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:01.030 15:47:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:01.030 15:47:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.030 15:47:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:01.030 [2024-12-06 15:47:44.140963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:01.030 [2024-12-06 15:47:44.155474] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:21:01.030 15:47:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.030 15:47:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:01.030 [2024-12-06 15:47:44.157896] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:01.970 15:47:45 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:01.970 15:47:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:01.970 15:47:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:01.970 15:47:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:01.970 15:47:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:01.970 15:47:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.970 15:47:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.970 15:47:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:01.970 15:47:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.970 15:47:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.970 15:47:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:01.970 "name": "raid_bdev1", 00:21:01.970 "uuid": "00ec58df-231e-468b-aaa0-d6ea1ab694f1", 00:21:01.970 "strip_size_kb": 0, 00:21:01.970 "state": "online", 00:21:01.970 "raid_level": "raid1", 00:21:01.970 "superblock": true, 00:21:01.970 "num_base_bdevs": 2, 00:21:01.970 "num_base_bdevs_discovered": 2, 00:21:01.970 "num_base_bdevs_operational": 2, 00:21:01.970 "process": { 00:21:01.970 "type": "rebuild", 00:21:01.970 "target": "spare", 00:21:01.970 "progress": { 00:21:01.970 "blocks": 2560, 00:21:01.970 "percent": 32 00:21:01.970 } 00:21:01.970 }, 00:21:01.971 "base_bdevs_list": [ 00:21:01.971 { 00:21:01.971 "name": "spare", 00:21:01.971 "uuid": 
"1730310a-b283-5eaa-9a29-c32867e184b7", 00:21:01.971 "is_configured": true, 00:21:01.971 "data_offset": 256, 00:21:01.971 "data_size": 7936 00:21:01.971 }, 00:21:01.971 { 00:21:01.971 "name": "BaseBdev2", 00:21:01.971 "uuid": "6d2bbfa3-584d-5238-8cdd-a7c228449b5c", 00:21:01.971 "is_configured": true, 00:21:01.971 "data_offset": 256, 00:21:01.971 "data_size": 7936 00:21:01.971 } 00:21:01.971 ] 00:21:01.971 }' 00:21:01.971 15:47:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:01.971 15:47:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:01.971 15:47:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:02.232 15:47:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:02.232 15:47:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:21:02.232 15:47:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:21:02.232 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:21:02.232 15:47:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:21:02.232 15:47:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:21:02.232 15:47:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:21:02.232 15:47:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=713 00:21:02.232 15:47:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:02.232 15:47:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:02.232 
15:47:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:02.232 15:47:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:02.232 15:47:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:02.232 15:47:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:02.232 15:47:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.232 15:47:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.232 15:47:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.232 15:47:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:02.232 15:47:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.232 15:47:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:02.232 "name": "raid_bdev1", 00:21:02.232 "uuid": "00ec58df-231e-468b-aaa0-d6ea1ab694f1", 00:21:02.232 "strip_size_kb": 0, 00:21:02.232 "state": "online", 00:21:02.232 "raid_level": "raid1", 00:21:02.232 "superblock": true, 00:21:02.232 "num_base_bdevs": 2, 00:21:02.232 "num_base_bdevs_discovered": 2, 00:21:02.232 "num_base_bdevs_operational": 2, 00:21:02.232 "process": { 00:21:02.232 "type": "rebuild", 00:21:02.232 "target": "spare", 00:21:02.232 "progress": { 00:21:02.232 "blocks": 2816, 00:21:02.232 "percent": 35 00:21:02.232 } 00:21:02.232 }, 00:21:02.232 "base_bdevs_list": [ 00:21:02.232 { 00:21:02.232 "name": "spare", 00:21:02.232 "uuid": "1730310a-b283-5eaa-9a29-c32867e184b7", 00:21:02.232 "is_configured": true, 00:21:02.232 "data_offset": 256, 00:21:02.232 "data_size": 7936 00:21:02.232 
}, 00:21:02.232 { 00:21:02.232 "name": "BaseBdev2", 00:21:02.232 "uuid": "6d2bbfa3-584d-5238-8cdd-a7c228449b5c", 00:21:02.232 "is_configured": true, 00:21:02.232 "data_offset": 256, 00:21:02.232 "data_size": 7936 00:21:02.232 } 00:21:02.232 ] 00:21:02.232 }' 00:21:02.232 15:47:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:02.232 15:47:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:02.232 15:47:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:02.232 15:47:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:02.232 15:47:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:03.166 15:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:03.166 15:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:03.166 15:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:03.166 15:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:03.166 15:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:03.166 15:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:03.166 15:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.425 15:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.425 15:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:03.425 15:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:03.425 15:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.425 15:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:03.425 "name": "raid_bdev1", 00:21:03.425 "uuid": "00ec58df-231e-468b-aaa0-d6ea1ab694f1", 00:21:03.425 "strip_size_kb": 0, 00:21:03.425 "state": "online", 00:21:03.425 "raid_level": "raid1", 00:21:03.425 "superblock": true, 00:21:03.425 "num_base_bdevs": 2, 00:21:03.425 "num_base_bdevs_discovered": 2, 00:21:03.425 "num_base_bdevs_operational": 2, 00:21:03.425 "process": { 00:21:03.425 "type": "rebuild", 00:21:03.425 "target": "spare", 00:21:03.425 "progress": { 00:21:03.425 "blocks": 5632, 00:21:03.425 "percent": 70 00:21:03.425 } 00:21:03.425 }, 00:21:03.425 "base_bdevs_list": [ 00:21:03.425 { 00:21:03.425 "name": "spare", 00:21:03.425 "uuid": "1730310a-b283-5eaa-9a29-c32867e184b7", 00:21:03.425 "is_configured": true, 00:21:03.425 "data_offset": 256, 00:21:03.425 "data_size": 7936 00:21:03.425 }, 00:21:03.425 { 00:21:03.425 "name": "BaseBdev2", 00:21:03.425 "uuid": "6d2bbfa3-584d-5238-8cdd-a7c228449b5c", 00:21:03.425 "is_configured": true, 00:21:03.425 "data_offset": 256, 00:21:03.425 "data_size": 7936 00:21:03.425 } 00:21:03.425 ] 00:21:03.425 }' 00:21:03.425 15:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:03.425 15:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:03.425 15:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:03.425 15:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:03.425 15:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:21:03.993 [2024-12-06 15:47:47.280602] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:03.993 [2024-12-06 15:47:47.280695] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:03.993 [2024-12-06 15:47:47.280817] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:04.562 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:04.562 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:04.562 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:04.562 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:04.562 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:04.562 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:04.562 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.562 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.562 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:04.562 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.562 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.562 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:04.562 "name": "raid_bdev1", 00:21:04.562 "uuid": "00ec58df-231e-468b-aaa0-d6ea1ab694f1", 00:21:04.562 
"strip_size_kb": 0, 00:21:04.562 "state": "online", 00:21:04.562 "raid_level": "raid1", 00:21:04.562 "superblock": true, 00:21:04.562 "num_base_bdevs": 2, 00:21:04.562 "num_base_bdevs_discovered": 2, 00:21:04.562 "num_base_bdevs_operational": 2, 00:21:04.562 "base_bdevs_list": [ 00:21:04.562 { 00:21:04.562 "name": "spare", 00:21:04.562 "uuid": "1730310a-b283-5eaa-9a29-c32867e184b7", 00:21:04.562 "is_configured": true, 00:21:04.562 "data_offset": 256, 00:21:04.562 "data_size": 7936 00:21:04.562 }, 00:21:04.562 { 00:21:04.562 "name": "BaseBdev2", 00:21:04.562 "uuid": "6d2bbfa3-584d-5238-8cdd-a7c228449b5c", 00:21:04.562 "is_configured": true, 00:21:04.562 "data_offset": 256, 00:21:04.562 "data_size": 7936 00:21:04.562 } 00:21:04.562 ] 00:21:04.562 }' 00:21:04.562 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:04.562 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:04.562 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:04.562 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:04.562 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:21:04.562 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:04.562 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:04.562 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:04.562 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:04.562 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:04.562 15:47:47 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.562 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.562 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.562 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:04.562 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.562 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:04.562 "name": "raid_bdev1", 00:21:04.562 "uuid": "00ec58df-231e-468b-aaa0-d6ea1ab694f1", 00:21:04.562 "strip_size_kb": 0, 00:21:04.562 "state": "online", 00:21:04.562 "raid_level": "raid1", 00:21:04.562 "superblock": true, 00:21:04.562 "num_base_bdevs": 2, 00:21:04.562 "num_base_bdevs_discovered": 2, 00:21:04.562 "num_base_bdevs_operational": 2, 00:21:04.562 "base_bdevs_list": [ 00:21:04.562 { 00:21:04.562 "name": "spare", 00:21:04.562 "uuid": "1730310a-b283-5eaa-9a29-c32867e184b7", 00:21:04.562 "is_configured": true, 00:21:04.562 "data_offset": 256, 00:21:04.562 "data_size": 7936 00:21:04.562 }, 00:21:04.562 { 00:21:04.562 "name": "BaseBdev2", 00:21:04.562 "uuid": "6d2bbfa3-584d-5238-8cdd-a7c228449b5c", 00:21:04.562 "is_configured": true, 00:21:04.562 "data_offset": 256, 00:21:04.562 "data_size": 7936 00:21:04.562 } 00:21:04.562 ] 00:21:04.562 }' 00:21:04.562 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:04.562 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:04.562 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:04.562 15:47:47 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:04.562 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:04.562 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:04.562 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:04.562 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:04.563 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:04.563 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:04.563 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:04.563 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:04.563 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:04.563 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:04.822 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.822 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.822 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.822 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:04.822 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.822 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:04.822 "name": "raid_bdev1", 00:21:04.822 "uuid": "00ec58df-231e-468b-aaa0-d6ea1ab694f1", 00:21:04.822 "strip_size_kb": 0, 00:21:04.822 "state": "online", 00:21:04.822 "raid_level": "raid1", 00:21:04.822 "superblock": true, 00:21:04.822 "num_base_bdevs": 2, 00:21:04.822 "num_base_bdevs_discovered": 2, 00:21:04.822 "num_base_bdevs_operational": 2, 00:21:04.822 "base_bdevs_list": [ 00:21:04.822 { 00:21:04.822 "name": "spare", 00:21:04.822 "uuid": "1730310a-b283-5eaa-9a29-c32867e184b7", 00:21:04.822 "is_configured": true, 00:21:04.822 "data_offset": 256, 00:21:04.822 "data_size": 7936 00:21:04.822 }, 00:21:04.822 { 00:21:04.822 "name": "BaseBdev2", 00:21:04.822 "uuid": "6d2bbfa3-584d-5238-8cdd-a7c228449b5c", 00:21:04.822 "is_configured": true, 00:21:04.822 "data_offset": 256, 00:21:04.822 "data_size": 7936 00:21:04.822 } 00:21:04.822 ] 00:21:04.822 }' 00:21:04.822 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:04.822 15:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:05.081 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:05.081 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.081 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:05.081 [2024-12-06 15:47:48.261629] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:05.081 [2024-12-06 15:47:48.261670] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:05.081 [2024-12-06 15:47:48.261792] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:05.081 [2024-12-06 15:47:48.261878] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:21:05.081 [2024-12-06 15:47:48.261892] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:05.081 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.081 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.081 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.081 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:21:05.081 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:05.081 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.081 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:05.081 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:05.081 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:21:05.081 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:05.081 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:05.081 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:05.081 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:05.081 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:05.081 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:05.081 15:47:48 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:21:05.081 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:05.081 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:05.081 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:05.341 /dev/nbd0 00:21:05.341 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:05.341 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:05.341 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:05.341 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:21:05.341 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:05.341 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:05.341 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:05.341 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:21:05.341 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:05.341 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:05.341 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:05.341 1+0 records in 00:21:05.341 1+0 records out 00:21:05.341 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348903 
s, 11.7 MB/s 00:21:05.341 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:05.341 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:21:05.341 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:05.341 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:05.341 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:21:05.341 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:05.341 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:05.341 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:05.600 /dev/nbd1 00:21:05.600 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:05.600 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:05.600 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:05.600 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:21:05.600 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:05.600 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:05.600 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:05.600 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@877 -- # break 00:21:05.600 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:05.600 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:05.600 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:05.600 1+0 records in 00:21:05.600 1+0 records out 00:21:05.600 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433793 s, 9.4 MB/s 00:21:05.600 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:05.600 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:21:05.600 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:05.600 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:05.600 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:21:05.600 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:05.600 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:05.600 15:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:05.860 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:05.860 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:05.860 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:05.860 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:05.860 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:21:05.860 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:05.860 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:06.119 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:06.119 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:06.119 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:06.119 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:06.119 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:06.119 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:06.119 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:21:06.119 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:21:06.119 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:06.119 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:06.380 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:06.380 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:06.380 
15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:06.380 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:06.380 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:06.380 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:06.380 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:21:06.380 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:21:06.380 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:06.380 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:06.380 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.380 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:06.380 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.380 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:06.380 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.380 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:06.380 [2024-12-06 15:47:49.572486] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:06.380 [2024-12-06 15:47:49.572564] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:06.380 [2024-12-06 15:47:49.572595] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 
00:21:06.380 [2024-12-06 15:47:49.572607] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:06.380 [2024-12-06 15:47:49.575263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:06.380 [2024-12-06 15:47:49.575316] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:06.380 [2024-12-06 15:47:49.575396] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:06.380 [2024-12-06 15:47:49.575458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:06.380 [2024-12-06 15:47:49.575636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:06.380 spare 00:21:06.380 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.380 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:06.380 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.380 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:06.641 [2024-12-06 15:47:49.675574] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:06.641 [2024-12-06 15:47:49.675607] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:06.641 [2024-12-06 15:47:49.675721] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:21:06.641 [2024-12-06 15:47:49.675893] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:06.641 [2024-12-06 15:47:49.675906] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:06.641 [2024-12-06 15:47:49.676057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:21:06.641 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.641 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:06.641 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:06.641 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:06.641 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:06.641 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:06.641 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:06.641 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:06.641 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:06.641 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:06.641 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:06.641 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.641 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.641 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.641 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:06.641 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.641 15:47:49 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:06.641 "name": "raid_bdev1", 00:21:06.641 "uuid": "00ec58df-231e-468b-aaa0-d6ea1ab694f1", 00:21:06.641 "strip_size_kb": 0, 00:21:06.641 "state": "online", 00:21:06.641 "raid_level": "raid1", 00:21:06.641 "superblock": true, 00:21:06.641 "num_base_bdevs": 2, 00:21:06.641 "num_base_bdevs_discovered": 2, 00:21:06.641 "num_base_bdevs_operational": 2, 00:21:06.641 "base_bdevs_list": [ 00:21:06.641 { 00:21:06.641 "name": "spare", 00:21:06.641 "uuid": "1730310a-b283-5eaa-9a29-c32867e184b7", 00:21:06.641 "is_configured": true, 00:21:06.641 "data_offset": 256, 00:21:06.641 "data_size": 7936 00:21:06.641 }, 00:21:06.641 { 00:21:06.641 "name": "BaseBdev2", 00:21:06.641 "uuid": "6d2bbfa3-584d-5238-8cdd-a7c228449b5c", 00:21:06.641 "is_configured": true, 00:21:06.641 "data_offset": 256, 00:21:06.641 "data_size": 7936 00:21:06.641 } 00:21:06.641 ] 00:21:06.641 }' 00:21:06.641 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:06.641 15:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:06.900 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:06.900 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:06.900 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:06.900 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:06.900 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:06.900 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.900 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.900 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.900 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:06.900 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.900 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:06.900 "name": "raid_bdev1", 00:21:06.900 "uuid": "00ec58df-231e-468b-aaa0-d6ea1ab694f1", 00:21:06.900 "strip_size_kb": 0, 00:21:06.900 "state": "online", 00:21:06.900 "raid_level": "raid1", 00:21:06.900 "superblock": true, 00:21:06.900 "num_base_bdevs": 2, 00:21:06.900 "num_base_bdevs_discovered": 2, 00:21:06.900 "num_base_bdevs_operational": 2, 00:21:06.900 "base_bdevs_list": [ 00:21:06.900 { 00:21:06.901 "name": "spare", 00:21:06.901 "uuid": "1730310a-b283-5eaa-9a29-c32867e184b7", 00:21:06.901 "is_configured": true, 00:21:06.901 "data_offset": 256, 00:21:06.901 "data_size": 7936 00:21:06.901 }, 00:21:06.901 { 00:21:06.901 "name": "BaseBdev2", 00:21:06.901 "uuid": "6d2bbfa3-584d-5238-8cdd-a7c228449b5c", 00:21:06.901 "is_configured": true, 00:21:06.901 "data_offset": 256, 00:21:06.901 "data_size": 7936 00:21:06.901 } 00:21:06.901 ] 00:21:06.901 }' 00:21:06.901 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:06.901 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:06.901 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:07.160 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:07.160 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:21:07.160 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.160 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:07.160 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:07.160 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.160 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:07.160 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:07.160 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.160 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:07.160 [2024-12-06 15:47:50.275563] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:07.160 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.160 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:07.160 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:07.160 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:07.160 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:07.160 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:07.160 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:07.160 15:47:50 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:07.160 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:07.160 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:07.160 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:07.160 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:07.160 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.160 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.160 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:07.160 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.160 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:07.160 "name": "raid_bdev1", 00:21:07.160 "uuid": "00ec58df-231e-468b-aaa0-d6ea1ab694f1", 00:21:07.160 "strip_size_kb": 0, 00:21:07.160 "state": "online", 00:21:07.160 "raid_level": "raid1", 00:21:07.160 "superblock": true, 00:21:07.160 "num_base_bdevs": 2, 00:21:07.160 "num_base_bdevs_discovered": 1, 00:21:07.160 "num_base_bdevs_operational": 1, 00:21:07.160 "base_bdevs_list": [ 00:21:07.160 { 00:21:07.160 "name": null, 00:21:07.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.161 "is_configured": false, 00:21:07.161 "data_offset": 0, 00:21:07.161 "data_size": 7936 00:21:07.161 }, 00:21:07.161 { 00:21:07.161 "name": "BaseBdev2", 00:21:07.161 "uuid": "6d2bbfa3-584d-5238-8cdd-a7c228449b5c", 00:21:07.161 "is_configured": true, 00:21:07.161 "data_offset": 256, 00:21:07.161 "data_size": 7936 00:21:07.161 } 
00:21:07.161 ] 00:21:07.161 }' 00:21:07.161 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:07.161 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:07.420 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:07.420 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.420 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:07.420 [2024-12-06 15:47:50.663053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:07.420 [2024-12-06 15:47:50.663456] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:07.420 [2024-12-06 15:47:50.663488] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:07.420 [2024-12-06 15:47:50.663549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:07.420 [2024-12-06 15:47:50.678138] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:21:07.420 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.420 15:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:07.420 [2024-12-06 15:47:50.680626] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:08.801 15:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:08.801 15:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:08.801 15:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:08.801 15:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:08.801 15:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:08.801 15:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.801 15:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.801 15:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.801 15:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:08.801 15:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.801 15:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:08.801 "name": "raid_bdev1", 00:21:08.801 
"uuid": "00ec58df-231e-468b-aaa0-d6ea1ab694f1", 00:21:08.801 "strip_size_kb": 0, 00:21:08.801 "state": "online", 00:21:08.801 "raid_level": "raid1", 00:21:08.801 "superblock": true, 00:21:08.801 "num_base_bdevs": 2, 00:21:08.801 "num_base_bdevs_discovered": 2, 00:21:08.801 "num_base_bdevs_operational": 2, 00:21:08.801 "process": { 00:21:08.801 "type": "rebuild", 00:21:08.801 "target": "spare", 00:21:08.801 "progress": { 00:21:08.801 "blocks": 2560, 00:21:08.801 "percent": 32 00:21:08.801 } 00:21:08.801 }, 00:21:08.801 "base_bdevs_list": [ 00:21:08.801 { 00:21:08.801 "name": "spare", 00:21:08.801 "uuid": "1730310a-b283-5eaa-9a29-c32867e184b7", 00:21:08.801 "is_configured": true, 00:21:08.801 "data_offset": 256, 00:21:08.801 "data_size": 7936 00:21:08.801 }, 00:21:08.801 { 00:21:08.801 "name": "BaseBdev2", 00:21:08.801 "uuid": "6d2bbfa3-584d-5238-8cdd-a7c228449b5c", 00:21:08.801 "is_configured": true, 00:21:08.801 "data_offset": 256, 00:21:08.801 "data_size": 7936 00:21:08.801 } 00:21:08.801 ] 00:21:08.801 }' 00:21:08.801 15:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:08.801 15:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:08.801 15:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:08.801 15:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:08.801 15:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:08.801 15:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.802 15:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:08.802 [2024-12-06 15:47:51.828962] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:08.802 
[2024-12-06 15:47:51.889624] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:08.802 [2024-12-06 15:47:51.889717] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:08.802 [2024-12-06 15:47:51.889735] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:08.802 [2024-12-06 15:47:51.889760] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:08.802 15:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.802 15:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:08.802 15:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:08.802 15:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:08.802 15:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:08.802 15:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:08.802 15:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:08.802 15:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:08.802 15:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:08.802 15:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:08.802 15:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:08.802 15:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.802 15:47:51 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.802 15:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.802 15:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:08.802 15:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.802 15:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:08.802 "name": "raid_bdev1", 00:21:08.802 "uuid": "00ec58df-231e-468b-aaa0-d6ea1ab694f1", 00:21:08.802 "strip_size_kb": 0, 00:21:08.802 "state": "online", 00:21:08.802 "raid_level": "raid1", 00:21:08.802 "superblock": true, 00:21:08.802 "num_base_bdevs": 2, 00:21:08.802 "num_base_bdevs_discovered": 1, 00:21:08.802 "num_base_bdevs_operational": 1, 00:21:08.802 "base_bdevs_list": [ 00:21:08.802 { 00:21:08.802 "name": null, 00:21:08.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.802 "is_configured": false, 00:21:08.802 "data_offset": 0, 00:21:08.802 "data_size": 7936 00:21:08.802 }, 00:21:08.802 { 00:21:08.802 "name": "BaseBdev2", 00:21:08.802 "uuid": "6d2bbfa3-584d-5238-8cdd-a7c228449b5c", 00:21:08.802 "is_configured": true, 00:21:08.802 "data_offset": 256, 00:21:08.802 "data_size": 7936 00:21:08.802 } 00:21:08.802 ] 00:21:08.802 }' 00:21:08.802 15:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:08.802 15:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:09.062 15:47:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:09.062 15:47:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.062 15:47:52 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:21:09.062 [2024-12-06 15:47:52.315256] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:09.062 [2024-12-06 15:47:52.315333] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:09.062 [2024-12-06 15:47:52.315368] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:09.062 [2024-12-06 15:47:52.315384] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:09.062 [2024-12-06 15:47:52.315705] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:09.062 [2024-12-06 15:47:52.315727] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:09.062 [2024-12-06 15:47:52.315799] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:09.062 [2024-12-06 15:47:52.315816] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:09.062 [2024-12-06 15:47:52.315828] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:09.062 [2024-12-06 15:47:52.315854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:09.062 spare 00:21:09.062 [2024-12-06 15:47:52.330944] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:21:09.062 15:47:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.062 15:47:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:09.062 [2024-12-06 15:47:52.333380] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:10.442 15:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:10.443 15:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:10.443 15:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:10.443 15:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:10.443 15:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:10.443 15:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.443 15:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.443 15:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.443 15:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:10.443 15:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.443 15:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:10.443 "name": 
"raid_bdev1", 00:21:10.443 "uuid": "00ec58df-231e-468b-aaa0-d6ea1ab694f1", 00:21:10.443 "strip_size_kb": 0, 00:21:10.443 "state": "online", 00:21:10.443 "raid_level": "raid1", 00:21:10.443 "superblock": true, 00:21:10.443 "num_base_bdevs": 2, 00:21:10.443 "num_base_bdevs_discovered": 2, 00:21:10.443 "num_base_bdevs_operational": 2, 00:21:10.443 "process": { 00:21:10.443 "type": "rebuild", 00:21:10.443 "target": "spare", 00:21:10.443 "progress": { 00:21:10.443 "blocks": 2560, 00:21:10.443 "percent": 32 00:21:10.443 } 00:21:10.443 }, 00:21:10.443 "base_bdevs_list": [ 00:21:10.443 { 00:21:10.443 "name": "spare", 00:21:10.443 "uuid": "1730310a-b283-5eaa-9a29-c32867e184b7", 00:21:10.443 "is_configured": true, 00:21:10.443 "data_offset": 256, 00:21:10.443 "data_size": 7936 00:21:10.443 }, 00:21:10.443 { 00:21:10.443 "name": "BaseBdev2", 00:21:10.443 "uuid": "6d2bbfa3-584d-5238-8cdd-a7c228449b5c", 00:21:10.443 "is_configured": true, 00:21:10.443 "data_offset": 256, 00:21:10.443 "data_size": 7936 00:21:10.443 } 00:21:10.443 ] 00:21:10.443 }' 00:21:10.443 15:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:10.443 15:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:10.443 15:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:10.443 15:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:10.443 15:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:10.443 15:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.443 15:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:10.443 [2024-12-06 15:47:53.462369] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:21:10.443 [2024-12-06 15:47:53.542181] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:10.443 [2024-12-06 15:47:53.542251] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:10.443 [2024-12-06 15:47:53.542288] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:10.443 [2024-12-06 15:47:53.542297] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:10.443 15:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.443 15:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:10.443 15:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:10.443 15:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:10.443 15:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:10.443 15:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:10.443 15:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:10.443 15:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:10.443 15:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:10.443 15:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:10.443 15:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:10.443 15:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:21:10.443 15:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.443 15:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.443 15:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:10.443 15:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.443 15:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:10.443 "name": "raid_bdev1", 00:21:10.443 "uuid": "00ec58df-231e-468b-aaa0-d6ea1ab694f1", 00:21:10.443 "strip_size_kb": 0, 00:21:10.443 "state": "online", 00:21:10.443 "raid_level": "raid1", 00:21:10.443 "superblock": true, 00:21:10.443 "num_base_bdevs": 2, 00:21:10.443 "num_base_bdevs_discovered": 1, 00:21:10.443 "num_base_bdevs_operational": 1, 00:21:10.443 "base_bdevs_list": [ 00:21:10.443 { 00:21:10.443 "name": null, 00:21:10.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.443 "is_configured": false, 00:21:10.443 "data_offset": 0, 00:21:10.443 "data_size": 7936 00:21:10.443 }, 00:21:10.443 { 00:21:10.443 "name": "BaseBdev2", 00:21:10.443 "uuid": "6d2bbfa3-584d-5238-8cdd-a7c228449b5c", 00:21:10.443 "is_configured": true, 00:21:10.443 "data_offset": 256, 00:21:10.443 "data_size": 7936 00:21:10.443 } 00:21:10.443 ] 00:21:10.443 }' 00:21:10.443 15:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:10.443 15:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:11.020 15:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:11.020 15:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:11.020 15:47:54 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:11.020 15:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:11.020 15:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:11.020 15:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.020 15:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.021 15:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.021 15:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:11.021 15:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.021 15:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:11.021 "name": "raid_bdev1", 00:21:11.021 "uuid": "00ec58df-231e-468b-aaa0-d6ea1ab694f1", 00:21:11.021 "strip_size_kb": 0, 00:21:11.021 "state": "online", 00:21:11.021 "raid_level": "raid1", 00:21:11.021 "superblock": true, 00:21:11.021 "num_base_bdevs": 2, 00:21:11.021 "num_base_bdevs_discovered": 1, 00:21:11.021 "num_base_bdevs_operational": 1, 00:21:11.021 "base_bdevs_list": [ 00:21:11.021 { 00:21:11.021 "name": null, 00:21:11.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.021 "is_configured": false, 00:21:11.021 "data_offset": 0, 00:21:11.021 "data_size": 7936 00:21:11.021 }, 00:21:11.021 { 00:21:11.021 "name": "BaseBdev2", 00:21:11.021 "uuid": "6d2bbfa3-584d-5238-8cdd-a7c228449b5c", 00:21:11.021 "is_configured": true, 00:21:11.021 "data_offset": 256, 00:21:11.021 "data_size": 7936 00:21:11.021 } 00:21:11.021 ] 00:21:11.021 }' 00:21:11.021 15:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:11.021 15:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:11.021 15:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:11.021 15:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:11.021 15:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:11.021 15:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.021 15:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:11.021 15:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.021 15:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:11.021 15:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.021 15:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:11.021 [2024-12-06 15:47:54.146945] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:11.021 [2024-12-06 15:47:54.147013] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:11.021 [2024-12-06 15:47:54.147042] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:21:11.021 [2024-12-06 15:47:54.147054] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:11.021 [2024-12-06 15:47:54.147347] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:11.021 [2024-12-06 15:47:54.147361] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:21:11.021 [2024-12-06 15:47:54.147425] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:11.021 [2024-12-06 15:47:54.147442] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:11.021 [2024-12-06 15:47:54.147455] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:11.021 [2024-12-06 15:47:54.147469] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:11.021 BaseBdev1 00:21:11.021 15:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.021 15:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:11.997 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:11.997 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:11.997 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:11.997 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:11.997 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:11.997 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:11.997 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:11.997 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:11.997 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:21:11.997 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:11.997 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.997 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.997 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.997 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:11.997 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.997 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:11.997 "name": "raid_bdev1", 00:21:11.997 "uuid": "00ec58df-231e-468b-aaa0-d6ea1ab694f1", 00:21:11.997 "strip_size_kb": 0, 00:21:11.997 "state": "online", 00:21:11.997 "raid_level": "raid1", 00:21:11.997 "superblock": true, 00:21:11.997 "num_base_bdevs": 2, 00:21:11.997 "num_base_bdevs_discovered": 1, 00:21:11.997 "num_base_bdevs_operational": 1, 00:21:11.997 "base_bdevs_list": [ 00:21:11.997 { 00:21:11.997 "name": null, 00:21:11.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.997 "is_configured": false, 00:21:11.997 "data_offset": 0, 00:21:11.997 "data_size": 7936 00:21:11.997 }, 00:21:11.997 { 00:21:11.997 "name": "BaseBdev2", 00:21:11.997 "uuid": "6d2bbfa3-584d-5238-8cdd-a7c228449b5c", 00:21:11.997 "is_configured": true, 00:21:11.997 "data_offset": 256, 00:21:11.997 "data_size": 7936 00:21:11.997 } 00:21:11.997 ] 00:21:11.997 }' 00:21:11.997 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:11.997 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:12.565 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:21:12.565 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:12.565 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:12.565 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:12.565 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:12.565 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:12.565 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:12.565 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.565 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:12.565 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.565 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:12.565 "name": "raid_bdev1", 00:21:12.565 "uuid": "00ec58df-231e-468b-aaa0-d6ea1ab694f1", 00:21:12.565 "strip_size_kb": 0, 00:21:12.565 "state": "online", 00:21:12.565 "raid_level": "raid1", 00:21:12.565 "superblock": true, 00:21:12.565 "num_base_bdevs": 2, 00:21:12.565 "num_base_bdevs_discovered": 1, 00:21:12.565 "num_base_bdevs_operational": 1, 00:21:12.565 "base_bdevs_list": [ 00:21:12.565 { 00:21:12.565 "name": null, 00:21:12.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.565 "is_configured": false, 00:21:12.565 "data_offset": 0, 00:21:12.565 "data_size": 7936 00:21:12.565 }, 00:21:12.565 { 00:21:12.565 "name": "BaseBdev2", 00:21:12.565 "uuid": "6d2bbfa3-584d-5238-8cdd-a7c228449b5c", 00:21:12.565 "is_configured": 
true, 00:21:12.565 "data_offset": 256, 00:21:12.565 "data_size": 7936 00:21:12.565 } 00:21:12.565 ] 00:21:12.565 }' 00:21:12.565 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:12.565 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:12.565 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:12.565 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:12.565 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:12.565 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:21:12.565 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:12.565 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:12.565 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:12.565 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:12.565 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:12.565 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:12.565 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.565 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:12.565 [2024-12-06 15:47:55.696821] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:12.565 [2024-12-06 15:47:55.697169] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:12.565 [2024-12-06 15:47:55.697318] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:12.565 request: 00:21:12.565 { 00:21:12.565 "base_bdev": "BaseBdev1", 00:21:12.565 "raid_bdev": "raid_bdev1", 00:21:12.565 "method": "bdev_raid_add_base_bdev", 00:21:12.565 "req_id": 1 00:21:12.565 } 00:21:12.565 Got JSON-RPC error response 00:21:12.565 response: 00:21:12.565 { 00:21:12.565 "code": -22, 00:21:12.565 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:12.565 } 00:21:12.565 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:12.565 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:21:12.565 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:12.565 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:12.565 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:12.565 15:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:13.502 15:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:13.502 15:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:13.502 15:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:13.502 15:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:21:13.502 15:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:13.502 15:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:13.502 15:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:13.502 15:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:13.502 15:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:13.502 15:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:13.502 15:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.502 15:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.502 15:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.502 15:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:13.502 15:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.502 15:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:13.502 "name": "raid_bdev1", 00:21:13.502 "uuid": "00ec58df-231e-468b-aaa0-d6ea1ab694f1", 00:21:13.502 "strip_size_kb": 0, 00:21:13.502 "state": "online", 00:21:13.502 "raid_level": "raid1", 00:21:13.502 "superblock": true, 00:21:13.502 "num_base_bdevs": 2, 00:21:13.502 "num_base_bdevs_discovered": 1, 00:21:13.502 "num_base_bdevs_operational": 1, 00:21:13.502 "base_bdevs_list": [ 00:21:13.502 { 00:21:13.502 "name": null, 00:21:13.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.502 "is_configured": false, 00:21:13.502 
"data_offset": 0, 00:21:13.502 "data_size": 7936 00:21:13.502 }, 00:21:13.502 { 00:21:13.502 "name": "BaseBdev2", 00:21:13.502 "uuid": "6d2bbfa3-584d-5238-8cdd-a7c228449b5c", 00:21:13.502 "is_configured": true, 00:21:13.502 "data_offset": 256, 00:21:13.502 "data_size": 7936 00:21:13.502 } 00:21:13.502 ] 00:21:13.502 }' 00:21:13.502 15:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:13.502 15:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:14.071 15:47:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:14.071 15:47:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:14.071 15:47:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:14.071 15:47:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:14.071 15:47:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:14.071 15:47:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.071 15:47:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.071 15:47:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:14.071 15:47:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.071 15:47:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.071 15:47:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:14.071 "name": "raid_bdev1", 00:21:14.071 "uuid": "00ec58df-231e-468b-aaa0-d6ea1ab694f1", 00:21:14.071 
"strip_size_kb": 0, 00:21:14.071 "state": "online", 00:21:14.071 "raid_level": "raid1", 00:21:14.071 "superblock": true, 00:21:14.071 "num_base_bdevs": 2, 00:21:14.071 "num_base_bdevs_discovered": 1, 00:21:14.071 "num_base_bdevs_operational": 1, 00:21:14.071 "base_bdevs_list": [ 00:21:14.071 { 00:21:14.071 "name": null, 00:21:14.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.071 "is_configured": false, 00:21:14.071 "data_offset": 0, 00:21:14.071 "data_size": 7936 00:21:14.071 }, 00:21:14.071 { 00:21:14.071 "name": "BaseBdev2", 00:21:14.071 "uuid": "6d2bbfa3-584d-5238-8cdd-a7c228449b5c", 00:21:14.071 "is_configured": true, 00:21:14.071 "data_offset": 256, 00:21:14.071 "data_size": 7936 00:21:14.071 } 00:21:14.071 ] 00:21:14.071 }' 00:21:14.071 15:47:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:14.071 15:47:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:14.071 15:47:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:14.071 15:47:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:14.071 15:47:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87758 00:21:14.071 15:47:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87758 ']' 00:21:14.071 15:47:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87758 00:21:14.071 15:47:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:21:14.071 15:47:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:14.071 15:47:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87758 00:21:14.071 15:47:57 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:14.071 killing process with pid 87758 00:21:14.071 Received shutdown signal, test time was about 60.000000 seconds 00:21:14.071 00:21:14.071 Latency(us) 00:21:14.071 [2024-12-06T15:47:57.366Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.071 [2024-12-06T15:47:57.366Z] =================================================================================================================== 00:21:14.071 [2024-12-06T15:47:57.366Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:14.071 15:47:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:14.071 15:47:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87758' 00:21:14.071 15:47:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87758 00:21:14.071 [2024-12-06 15:47:57.325475] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:14.071 15:47:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87758 00:21:14.071 [2024-12-06 15:47:57.325643] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:14.071 [2024-12-06 15:47:57.325715] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:14.071 [2024-12-06 15:47:57.325731] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:14.640 [2024-12-06 15:47:57.665158] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:15.577 15:47:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:21:15.577 00:21:15.577 real 0m19.864s 00:21:15.577 user 0m25.447s 00:21:15.577 sys 0m3.069s 00:21:15.577 15:47:58 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:15.578 15:47:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:15.578 ************************************ 00:21:15.578 END TEST raid_rebuild_test_sb_md_separate 00:21:15.578 ************************************ 00:21:15.838 15:47:58 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:21:15.838 15:47:58 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:21:15.838 15:47:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:15.838 15:47:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:15.838 15:47:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:15.838 ************************************ 00:21:15.838 START TEST raid_state_function_test_sb_md_interleaved 00:21:15.838 ************************************ 00:21:15.838 15:47:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:21:15.838 15:47:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:21:15.838 15:47:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:21:15.838 15:47:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:21:15.838 15:47:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:15.838 15:47:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:15.838 15:47:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:15.838 15:47:58 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:15.838 15:47:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:15.838 15:47:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:15.838 15:47:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:15.838 15:47:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:15.838 15:47:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:15.838 15:47:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:15.838 15:47:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:15.838 15:47:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:15.838 15:47:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:15.838 15:47:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:15.838 15:47:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:15.838 15:47:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:21:15.838 15:47:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:21:15.838 15:47:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:21:15.838 15:47:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:21:15.838 15:47:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88445 00:21:15.838 15:47:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:15.838 15:47:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88445' 00:21:15.838 Process raid pid: 88445 00:21:15.838 15:47:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88445 00:21:15.838 15:47:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88445 ']' 00:21:15.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:15.838 15:47:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.838 15:47:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:15.838 15:47:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.838 15:47:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:15.838 15:47:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:15.838 [2024-12-06 15:47:59.046671] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:21:15.838 [2024-12-06 15:47:59.047020] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:16.098 [2024-12-06 15:47:59.236626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.098 [2024-12-06 15:47:59.370752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.357 [2024-12-06 15:47:59.618173] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:16.357 [2024-12-06 15:47:59.618208] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:16.616 15:47:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:16.616 15:47:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:21:16.616 15:47:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:16.616 15:47:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.616 15:47:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:16.616 [2024-12-06 15:47:59.885268] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:16.616 [2024-12-06 15:47:59.885482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:16.616 [2024-12-06 15:47:59.885590] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:16.616 [2024-12-06 15:47:59.885639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:16.616 15:47:59 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.616 15:47:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:16.616 15:47:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:16.616 15:47:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:16.616 15:47:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:16.616 15:47:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:16.616 15:47:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:16.616 15:47:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:16.616 15:47:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:16.616 15:47:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:16.616 15:47:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:16.616 15:47:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:16.616 15:47:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.616 15:47:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.616 15:47:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:16.876 15:47:59 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.876 15:47:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:16.876 "name": "Existed_Raid", 00:21:16.876 "uuid": "11bc19f0-d55a-437a-88b4-aa477723ad52", 00:21:16.876 "strip_size_kb": 0, 00:21:16.876 "state": "configuring", 00:21:16.876 "raid_level": "raid1", 00:21:16.876 "superblock": true, 00:21:16.876 "num_base_bdevs": 2, 00:21:16.876 "num_base_bdevs_discovered": 0, 00:21:16.876 "num_base_bdevs_operational": 2, 00:21:16.876 "base_bdevs_list": [ 00:21:16.876 { 00:21:16.876 "name": "BaseBdev1", 00:21:16.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.876 "is_configured": false, 00:21:16.876 "data_offset": 0, 00:21:16.876 "data_size": 0 00:21:16.876 }, 00:21:16.876 { 00:21:16.876 "name": "BaseBdev2", 00:21:16.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.876 "is_configured": false, 00:21:16.876 "data_offset": 0, 00:21:16.876 "data_size": 0 00:21:16.876 } 00:21:16.876 ] 00:21:16.876 }' 00:21:16.876 15:47:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:16.876 15:47:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:17.135 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:17.135 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.135 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:17.135 [2024-12-06 15:48:00.292710] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:17.135 [2024-12-06 15:48:00.292753] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:21:17.135 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.135 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:17.135 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.135 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:17.135 [2024-12-06 15:48:00.304698] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:17.135 [2024-12-06 15:48:00.304748] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:17.135 [2024-12-06 15:48:00.304760] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:17.135 [2024-12-06 15:48:00.304778] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:17.135 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.135 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:21:17.135 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.135 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:17.135 [2024-12-06 15:48:00.360304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:17.135 BaseBdev1 00:21:17.135 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.135 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:17.135 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:17.135 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:17.135 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:21:17.135 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:17.136 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:17.136 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:17.136 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.136 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:17.136 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.136 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:17.136 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.136 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:17.136 [ 00:21:17.136 { 00:21:17.136 "name": "BaseBdev1", 00:21:17.136 "aliases": [ 00:21:17.136 "97ff34ed-c869-4a60-b86b-4c4347490f4a" 00:21:17.136 ], 00:21:17.136 "product_name": "Malloc disk", 00:21:17.136 "block_size": 4128, 00:21:17.136 "num_blocks": 8192, 00:21:17.136 "uuid": "97ff34ed-c869-4a60-b86b-4c4347490f4a", 00:21:17.136 "md_size": 32, 00:21:17.136 
"md_interleave": true, 00:21:17.136 "dif_type": 0, 00:21:17.136 "assigned_rate_limits": { 00:21:17.136 "rw_ios_per_sec": 0, 00:21:17.136 "rw_mbytes_per_sec": 0, 00:21:17.136 "r_mbytes_per_sec": 0, 00:21:17.136 "w_mbytes_per_sec": 0 00:21:17.136 }, 00:21:17.136 "claimed": true, 00:21:17.136 "claim_type": "exclusive_write", 00:21:17.136 "zoned": false, 00:21:17.136 "supported_io_types": { 00:21:17.136 "read": true, 00:21:17.136 "write": true, 00:21:17.136 "unmap": true, 00:21:17.136 "flush": true, 00:21:17.136 "reset": true, 00:21:17.136 "nvme_admin": false, 00:21:17.136 "nvme_io": false, 00:21:17.136 "nvme_io_md": false, 00:21:17.136 "write_zeroes": true, 00:21:17.136 "zcopy": true, 00:21:17.136 "get_zone_info": false, 00:21:17.136 "zone_management": false, 00:21:17.136 "zone_append": false, 00:21:17.136 "compare": false, 00:21:17.136 "compare_and_write": false, 00:21:17.136 "abort": true, 00:21:17.136 "seek_hole": false, 00:21:17.136 "seek_data": false, 00:21:17.136 "copy": true, 00:21:17.136 "nvme_iov_md": false 00:21:17.136 }, 00:21:17.136 "memory_domains": [ 00:21:17.136 { 00:21:17.136 "dma_device_id": "system", 00:21:17.136 "dma_device_type": 1 00:21:17.136 }, 00:21:17.136 { 00:21:17.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:17.136 "dma_device_type": 2 00:21:17.136 } 00:21:17.136 ], 00:21:17.136 "driver_specific": {} 00:21:17.136 } 00:21:17.136 ] 00:21:17.136 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.136 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:21:17.136 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:17.136 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:17.136 15:48:00 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:17.136 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:17.136 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:17.136 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:17.136 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:17.136 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:17.136 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:17.136 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:17.136 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.136 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.136 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:17.136 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:17.136 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.395 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:17.395 "name": "Existed_Raid", 00:21:17.395 "uuid": "4873b297-a516-428f-b1cd-35174e1ec579", 00:21:17.395 "strip_size_kb": 0, 00:21:17.395 "state": "configuring", 00:21:17.395 "raid_level": "raid1", 
00:21:17.395 "superblock": true, 00:21:17.395 "num_base_bdevs": 2, 00:21:17.395 "num_base_bdevs_discovered": 1, 00:21:17.395 "num_base_bdevs_operational": 2, 00:21:17.395 "base_bdevs_list": [ 00:21:17.395 { 00:21:17.395 "name": "BaseBdev1", 00:21:17.395 "uuid": "97ff34ed-c869-4a60-b86b-4c4347490f4a", 00:21:17.395 "is_configured": true, 00:21:17.395 "data_offset": 256, 00:21:17.395 "data_size": 7936 00:21:17.395 }, 00:21:17.395 { 00:21:17.395 "name": "BaseBdev2", 00:21:17.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.395 "is_configured": false, 00:21:17.395 "data_offset": 0, 00:21:17.395 "data_size": 0 00:21:17.395 } 00:21:17.395 ] 00:21:17.395 }' 00:21:17.395 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:17.395 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:17.655 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:17.655 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.655 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:17.655 [2024-12-06 15:48:00.851713] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:17.655 [2024-12-06 15:48:00.851901] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:17.655 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.655 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:17.655 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:21:17.655 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:17.655 [2024-12-06 15:48:00.863758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:17.655 [2024-12-06 15:48:00.866284] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:17.655 [2024-12-06 15:48:00.866441] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:17.655 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.655 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:17.655 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:17.655 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:17.655 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:17.655 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:17.655 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:17.655 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:17.655 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:17.655 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:17.655 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:17.656 
15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:17.656 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:17.656 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.656 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.656 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:17.656 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:17.656 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.656 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:17.656 "name": "Existed_Raid", 00:21:17.656 "uuid": "fb3715b3-faac-456a-88eb-391f88ead608", 00:21:17.656 "strip_size_kb": 0, 00:21:17.656 "state": "configuring", 00:21:17.656 "raid_level": "raid1", 00:21:17.656 "superblock": true, 00:21:17.656 "num_base_bdevs": 2, 00:21:17.656 "num_base_bdevs_discovered": 1, 00:21:17.656 "num_base_bdevs_operational": 2, 00:21:17.656 "base_bdevs_list": [ 00:21:17.656 { 00:21:17.656 "name": "BaseBdev1", 00:21:17.656 "uuid": "97ff34ed-c869-4a60-b86b-4c4347490f4a", 00:21:17.656 "is_configured": true, 00:21:17.656 "data_offset": 256, 00:21:17.656 "data_size": 7936 00:21:17.656 }, 00:21:17.656 { 00:21:17.656 "name": "BaseBdev2", 00:21:17.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.656 "is_configured": false, 00:21:17.656 "data_offset": 0, 00:21:17.656 "data_size": 0 00:21:17.656 } 00:21:17.656 ] 00:21:17.656 }' 00:21:17.656 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:21:17.656 15:48:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:18.223 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:21:18.223 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.223 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:18.223 [2024-12-06 15:48:01.313532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:18.223 [2024-12-06 15:48:01.313989] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:18.223 [2024-12-06 15:48:01.314108] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:18.223 [2024-12-06 15:48:01.314262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:18.223 BaseBdev2 00:21:18.223 [2024-12-06 15:48:01.314448] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:18.223 [2024-12-06 15:48:01.314469] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:18.223 [2024-12-06 15:48:01.314570] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:18.223 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.223 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:18.223 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:18.223 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:21:18.223 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:21:18.223 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:18.223 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:18.223 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:18.223 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.223 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:18.223 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.223 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:18.223 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.223 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:18.223 [ 00:21:18.223 { 00:21:18.223 "name": "BaseBdev2", 00:21:18.223 "aliases": [ 00:21:18.223 "c309b46b-9e57-47b0-ad92-8cde3000f8a1" 00:21:18.223 ], 00:21:18.223 "product_name": "Malloc disk", 00:21:18.223 "block_size": 4128, 00:21:18.223 "num_blocks": 8192, 00:21:18.223 "uuid": "c309b46b-9e57-47b0-ad92-8cde3000f8a1", 00:21:18.223 "md_size": 32, 00:21:18.223 "md_interleave": true, 00:21:18.223 "dif_type": 0, 00:21:18.223 "assigned_rate_limits": { 00:21:18.223 "rw_ios_per_sec": 0, 00:21:18.223 "rw_mbytes_per_sec": 0, 00:21:18.223 "r_mbytes_per_sec": 0, 00:21:18.223 "w_mbytes_per_sec": 0 00:21:18.223 }, 00:21:18.223 "claimed": true, 00:21:18.223 "claim_type": "exclusive_write", 
00:21:18.223 "zoned": false, 00:21:18.223 "supported_io_types": { 00:21:18.224 "read": true, 00:21:18.224 "write": true, 00:21:18.224 "unmap": true, 00:21:18.224 "flush": true, 00:21:18.224 "reset": true, 00:21:18.224 "nvme_admin": false, 00:21:18.224 "nvme_io": false, 00:21:18.224 "nvme_io_md": false, 00:21:18.224 "write_zeroes": true, 00:21:18.224 "zcopy": true, 00:21:18.224 "get_zone_info": false, 00:21:18.224 "zone_management": false, 00:21:18.224 "zone_append": false, 00:21:18.224 "compare": false, 00:21:18.224 "compare_and_write": false, 00:21:18.224 "abort": true, 00:21:18.224 "seek_hole": false, 00:21:18.224 "seek_data": false, 00:21:18.224 "copy": true, 00:21:18.224 "nvme_iov_md": false 00:21:18.224 }, 00:21:18.224 "memory_domains": [ 00:21:18.224 { 00:21:18.224 "dma_device_id": "system", 00:21:18.224 "dma_device_type": 1 00:21:18.224 }, 00:21:18.224 { 00:21:18.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:18.224 "dma_device_type": 2 00:21:18.224 } 00:21:18.224 ], 00:21:18.224 "driver_specific": {} 00:21:18.224 } 00:21:18.224 ] 00:21:18.224 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.224 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:21:18.224 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:18.224 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:18.224 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:21:18.224 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:18.224 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:18.224 
15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:18.224 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:18.224 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:18.224 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:18.224 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:18.224 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:18.224 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:18.224 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.224 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.224 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:18.224 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:18.224 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.224 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:18.224 "name": "Existed_Raid", 00:21:18.224 "uuid": "fb3715b3-faac-456a-88eb-391f88ead608", 00:21:18.224 "strip_size_kb": 0, 00:21:18.224 "state": "online", 00:21:18.224 "raid_level": "raid1", 00:21:18.224 "superblock": true, 00:21:18.224 "num_base_bdevs": 2, 00:21:18.224 "num_base_bdevs_discovered": 2, 00:21:18.224 
"num_base_bdevs_operational": 2, 00:21:18.224 "base_bdevs_list": [ 00:21:18.224 { 00:21:18.224 "name": "BaseBdev1", 00:21:18.224 "uuid": "97ff34ed-c869-4a60-b86b-4c4347490f4a", 00:21:18.224 "is_configured": true, 00:21:18.224 "data_offset": 256, 00:21:18.224 "data_size": 7936 00:21:18.224 }, 00:21:18.224 { 00:21:18.224 "name": "BaseBdev2", 00:21:18.224 "uuid": "c309b46b-9e57-47b0-ad92-8cde3000f8a1", 00:21:18.224 "is_configured": true, 00:21:18.224 "data_offset": 256, 00:21:18.224 "data_size": 7936 00:21:18.224 } 00:21:18.224 ] 00:21:18.224 }' 00:21:18.224 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:18.224 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:18.482 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:18.482 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:18.482 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:18.482 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:18.482 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:21:18.482 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:18.741 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:18.741 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:18.741 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.741 15:48:01 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:18.741 [2024-12-06 15:48:01.785202] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:18.741 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.741 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:18.741 "name": "Existed_Raid", 00:21:18.741 "aliases": [ 00:21:18.741 "fb3715b3-faac-456a-88eb-391f88ead608" 00:21:18.741 ], 00:21:18.741 "product_name": "Raid Volume", 00:21:18.741 "block_size": 4128, 00:21:18.741 "num_blocks": 7936, 00:21:18.741 "uuid": "fb3715b3-faac-456a-88eb-391f88ead608", 00:21:18.741 "md_size": 32, 00:21:18.741 "md_interleave": true, 00:21:18.741 "dif_type": 0, 00:21:18.741 "assigned_rate_limits": { 00:21:18.741 "rw_ios_per_sec": 0, 00:21:18.741 "rw_mbytes_per_sec": 0, 00:21:18.741 "r_mbytes_per_sec": 0, 00:21:18.741 "w_mbytes_per_sec": 0 00:21:18.741 }, 00:21:18.741 "claimed": false, 00:21:18.741 "zoned": false, 00:21:18.741 "supported_io_types": { 00:21:18.741 "read": true, 00:21:18.741 "write": true, 00:21:18.741 "unmap": false, 00:21:18.741 "flush": false, 00:21:18.741 "reset": true, 00:21:18.741 "nvme_admin": false, 00:21:18.741 "nvme_io": false, 00:21:18.741 "nvme_io_md": false, 00:21:18.741 "write_zeroes": true, 00:21:18.741 "zcopy": false, 00:21:18.741 "get_zone_info": false, 00:21:18.741 "zone_management": false, 00:21:18.741 "zone_append": false, 00:21:18.741 "compare": false, 00:21:18.741 "compare_and_write": false, 00:21:18.741 "abort": false, 00:21:18.741 "seek_hole": false, 00:21:18.741 "seek_data": false, 00:21:18.741 "copy": false, 00:21:18.741 "nvme_iov_md": false 00:21:18.741 }, 00:21:18.741 "memory_domains": [ 00:21:18.741 { 00:21:18.741 "dma_device_id": "system", 00:21:18.741 "dma_device_type": 1 00:21:18.741 }, 00:21:18.741 { 00:21:18.741 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:21:18.741 "dma_device_type": 2 00:21:18.741 }, 00:21:18.741 { 00:21:18.741 "dma_device_id": "system", 00:21:18.741 "dma_device_type": 1 00:21:18.741 }, 00:21:18.741 { 00:21:18.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:18.741 "dma_device_type": 2 00:21:18.741 } 00:21:18.741 ], 00:21:18.741 "driver_specific": { 00:21:18.741 "raid": { 00:21:18.741 "uuid": "fb3715b3-faac-456a-88eb-391f88ead608", 00:21:18.741 "strip_size_kb": 0, 00:21:18.741 "state": "online", 00:21:18.741 "raid_level": "raid1", 00:21:18.741 "superblock": true, 00:21:18.741 "num_base_bdevs": 2, 00:21:18.741 "num_base_bdevs_discovered": 2, 00:21:18.741 "num_base_bdevs_operational": 2, 00:21:18.741 "base_bdevs_list": [ 00:21:18.741 { 00:21:18.741 "name": "BaseBdev1", 00:21:18.741 "uuid": "97ff34ed-c869-4a60-b86b-4c4347490f4a", 00:21:18.741 "is_configured": true, 00:21:18.741 "data_offset": 256, 00:21:18.741 "data_size": 7936 00:21:18.741 }, 00:21:18.741 { 00:21:18.741 "name": "BaseBdev2", 00:21:18.741 "uuid": "c309b46b-9e57-47b0-ad92-8cde3000f8a1", 00:21:18.741 "is_configured": true, 00:21:18.741 "data_offset": 256, 00:21:18.741 "data_size": 7936 00:21:18.741 } 00:21:18.741 ] 00:21:18.741 } 00:21:18.741 } 00:21:18.741 }' 00:21:18.741 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:18.741 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:18.741 BaseBdev2' 00:21:18.741 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:18.741 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:21:18.741 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:21:18.741 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:18.741 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.741 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:18.741 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:18.741 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.741 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:18.741 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:18.741 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:18.741 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:18.742 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.742 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:18.742 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:18.742 15:48:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.742 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:18.742 
15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:18.742 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:18.742 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.742 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:18.742 [2024-12-06 15:48:02.020687] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:19.000 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.000 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:19.000 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:21:19.001 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:19.001 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:21:19.001 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:19.001 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:21:19.001 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:19.001 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:19.001 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:19.001 15:48:02 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:19.001 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:19.001 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:19.001 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:19.001 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:19.001 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:19.001 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.001 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.001 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:19.001 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:19.001 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.001 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:19.001 "name": "Existed_Raid", 00:21:19.001 "uuid": "fb3715b3-faac-456a-88eb-391f88ead608", 00:21:19.001 "strip_size_kb": 0, 00:21:19.001 "state": "online", 00:21:19.001 "raid_level": "raid1", 00:21:19.001 "superblock": true, 00:21:19.001 "num_base_bdevs": 2, 00:21:19.001 "num_base_bdevs_discovered": 1, 00:21:19.001 "num_base_bdevs_operational": 1, 00:21:19.001 "base_bdevs_list": [ 00:21:19.001 { 00:21:19.001 "name": null, 00:21:19.001 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:19.001 "is_configured": false, 00:21:19.001 "data_offset": 0, 00:21:19.001 "data_size": 7936 00:21:19.001 }, 00:21:19.001 { 00:21:19.001 "name": "BaseBdev2", 00:21:19.001 "uuid": "c309b46b-9e57-47b0-ad92-8cde3000f8a1", 00:21:19.001 "is_configured": true, 00:21:19.001 "data_offset": 256, 00:21:19.001 "data_size": 7936 00:21:19.001 } 00:21:19.001 ] 00:21:19.001 }' 00:21:19.001 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:19.001 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:19.260 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:19.260 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:19.260 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:19.260 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.260 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.260 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:19.520 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.521 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:19.521 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:19.521 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:19.521 15:48:02 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.521 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:19.521 [2024-12-06 15:48:02.592197] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:19.521 [2024-12-06 15:48:02.592464] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:19.521 [2024-12-06 15:48:02.698333] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:19.521 [2024-12-06 15:48:02.698629] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:19.521 [2024-12-06 15:48:02.698662] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:19.521 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.521 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:19.521 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:19.521 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.521 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:19.521 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.521 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:19.521 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.521 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:19.521 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:19.521 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:21:19.521 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88445 00:21:19.521 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88445 ']' 00:21:19.521 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88445 00:21:19.521 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:21:19.521 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:19.521 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88445 00:21:19.521 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:19.521 killing process with pid 88445 00:21:19.521 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:19.521 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88445' 00:21:19.521 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88445 00:21:19.521 [2024-12-06 15:48:02.786980] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:19.521 15:48:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88445 00:21:19.521 [2024-12-06 15:48:02.805895] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:20.903 
************************************ 00:21:20.903 END TEST raid_state_function_test_sb_md_interleaved 00:21:20.903 ************************************ 00:21:20.903 15:48:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:21:20.903 00:21:20.903 real 0m5.117s 00:21:20.903 user 0m7.094s 00:21:20.903 sys 0m1.052s 00:21:20.903 15:48:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:20.903 15:48:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:20.903 15:48:04 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:21:20.903 15:48:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:20.903 15:48:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:20.903 15:48:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:20.903 ************************************ 00:21:20.903 START TEST raid_superblock_test_md_interleaved 00:21:20.903 ************************************ 00:21:20.903 15:48:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:21:20.903 15:48:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:21:20.903 15:48:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:21:20.903 15:48:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:20.903 15:48:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:20.903 15:48:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:20.903 15:48:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:21:20.903 15:48:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:20.903 15:48:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:20.903 15:48:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:21:20.903 15:48:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:20.903 15:48:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:20.903 15:48:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:20.903 15:48:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:20.903 15:48:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:21:20.903 15:48:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:21:20.903 15:48:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88697 00:21:20.903 15:48:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88697 00:21:20.903 15:48:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:21:20.903 15:48:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88697 ']' 00:21:20.903 15:48:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:20.903 15:48:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:20.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:20.903 15:48:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:20.903 15:48:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:20.903 15:48:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:21.163 [2024-12-06 15:48:04.236761] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:21:21.163 [2024-12-06 15:48:04.236907] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88697 ] 00:21:21.163 [2024-12-06 15:48:04.425229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.423 [2024-12-06 15:48:04.562402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.683 [2024-12-06 15:48:04.805659] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:21.683 [2024-12-06 15:48:04.805750] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:21.943 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:21.943 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:21:21.943 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:21.943 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:21.943 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:21.943 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:21:21.943 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:21.943 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:21.943 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:21.943 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:21.943 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:21:21.943 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.943 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:21.943 malloc1 00:21:21.943 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.943 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:21.943 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.943 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:21.943 [2024-12-06 15:48:05.126237] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:21.943 [2024-12-06 15:48:05.126308] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:21.943 [2024-12-06 15:48:05.126335] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:21.943 [2024-12-06 15:48:05.126348] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:21.943 
[2024-12-06 15:48:05.128750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:21.943 [2024-12-06 15:48:05.128800] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:21.943 pt1 00:21:21.943 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.943 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:21.943 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:21.943 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:21.943 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:21:21.943 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:21.943 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:21.943 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:21.943 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:21.943 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:21:21.943 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.943 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:21.943 malloc2 00:21:21.943 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.943 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:21.943 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.943 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:21.943 [2024-12-06 15:48:05.189787] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:21.943 [2024-12-06 15:48:05.189976] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:21.943 [2024-12-06 15:48:05.190039] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:21.943 [2024-12-06 15:48:05.190128] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:21.943 [2024-12-06 15:48:05.192693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:21.943 [2024-12-06 15:48:05.192839] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:21.943 pt2 00:21:21.943 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.943 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:21.943 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:21.944 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:21:21.944 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.944 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:21.944 [2024-12-06 15:48:05.201827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:21.944 [2024-12-06 15:48:05.204141] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:21.944 [2024-12-06 15:48:05.204331] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:21.944 [2024-12-06 15:48:05.204345] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:21.944 [2024-12-06 15:48:05.204425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:21.944 [2024-12-06 15:48:05.204531] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:21.944 [2024-12-06 15:48:05.204547] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:21.944 [2024-12-06 15:48:05.204625] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:21.944 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.944 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:21.944 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:21.944 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:21.944 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:21.944 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:21.944 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:21.944 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:21.944 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:21.944 
15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:21.944 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:21.944 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.944 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.944 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.944 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:21.944 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.203 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:22.203 "name": "raid_bdev1", 00:21:22.203 "uuid": "537b824a-202b-4b4f-90ee-ac31b4e66438", 00:21:22.203 "strip_size_kb": 0, 00:21:22.203 "state": "online", 00:21:22.203 "raid_level": "raid1", 00:21:22.203 "superblock": true, 00:21:22.203 "num_base_bdevs": 2, 00:21:22.203 "num_base_bdevs_discovered": 2, 00:21:22.203 "num_base_bdevs_operational": 2, 00:21:22.203 "base_bdevs_list": [ 00:21:22.203 { 00:21:22.203 "name": "pt1", 00:21:22.203 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:22.203 "is_configured": true, 00:21:22.203 "data_offset": 256, 00:21:22.203 "data_size": 7936 00:21:22.203 }, 00:21:22.203 { 00:21:22.203 "name": "pt2", 00:21:22.203 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:22.203 "is_configured": true, 00:21:22.203 "data_offset": 256, 00:21:22.203 "data_size": 7936 00:21:22.203 } 00:21:22.203 ] 00:21:22.203 }' 00:21:22.203 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:22.203 15:48:05 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:22.461 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:22.461 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:22.461 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:22.461 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:22.461 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:21:22.461 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:22.461 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:22.461 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:22.461 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.461 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:22.461 [2024-12-06 15:48:05.625571] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:22.461 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.461 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:22.461 "name": "raid_bdev1", 00:21:22.461 "aliases": [ 00:21:22.461 "537b824a-202b-4b4f-90ee-ac31b4e66438" 00:21:22.461 ], 00:21:22.461 "product_name": "Raid Volume", 00:21:22.461 "block_size": 4128, 00:21:22.461 "num_blocks": 7936, 00:21:22.461 "uuid": "537b824a-202b-4b4f-90ee-ac31b4e66438", 00:21:22.461 "md_size": 32, 
00:21:22.461 "md_interleave": true, 00:21:22.461 "dif_type": 0, 00:21:22.461 "assigned_rate_limits": { 00:21:22.461 "rw_ios_per_sec": 0, 00:21:22.461 "rw_mbytes_per_sec": 0, 00:21:22.461 "r_mbytes_per_sec": 0, 00:21:22.461 "w_mbytes_per_sec": 0 00:21:22.461 }, 00:21:22.461 "claimed": false, 00:21:22.461 "zoned": false, 00:21:22.461 "supported_io_types": { 00:21:22.461 "read": true, 00:21:22.461 "write": true, 00:21:22.461 "unmap": false, 00:21:22.461 "flush": false, 00:21:22.461 "reset": true, 00:21:22.461 "nvme_admin": false, 00:21:22.461 "nvme_io": false, 00:21:22.461 "nvme_io_md": false, 00:21:22.461 "write_zeroes": true, 00:21:22.461 "zcopy": false, 00:21:22.461 "get_zone_info": false, 00:21:22.461 "zone_management": false, 00:21:22.461 "zone_append": false, 00:21:22.461 "compare": false, 00:21:22.462 "compare_and_write": false, 00:21:22.462 "abort": false, 00:21:22.462 "seek_hole": false, 00:21:22.462 "seek_data": false, 00:21:22.462 "copy": false, 00:21:22.462 "nvme_iov_md": false 00:21:22.462 }, 00:21:22.462 "memory_domains": [ 00:21:22.462 { 00:21:22.462 "dma_device_id": "system", 00:21:22.462 "dma_device_type": 1 00:21:22.462 }, 00:21:22.462 { 00:21:22.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:22.462 "dma_device_type": 2 00:21:22.462 }, 00:21:22.462 { 00:21:22.462 "dma_device_id": "system", 00:21:22.462 "dma_device_type": 1 00:21:22.462 }, 00:21:22.462 { 00:21:22.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:22.462 "dma_device_type": 2 00:21:22.462 } 00:21:22.462 ], 00:21:22.462 "driver_specific": { 00:21:22.462 "raid": { 00:21:22.462 "uuid": "537b824a-202b-4b4f-90ee-ac31b4e66438", 00:21:22.462 "strip_size_kb": 0, 00:21:22.462 "state": "online", 00:21:22.462 "raid_level": "raid1", 00:21:22.462 "superblock": true, 00:21:22.462 "num_base_bdevs": 2, 00:21:22.462 "num_base_bdevs_discovered": 2, 00:21:22.462 "num_base_bdevs_operational": 2, 00:21:22.462 "base_bdevs_list": [ 00:21:22.462 { 00:21:22.462 "name": "pt1", 00:21:22.462 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:21:22.462 "is_configured": true, 00:21:22.462 "data_offset": 256, 00:21:22.462 "data_size": 7936 00:21:22.462 }, 00:21:22.462 { 00:21:22.462 "name": "pt2", 00:21:22.462 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:22.462 "is_configured": true, 00:21:22.462 "data_offset": 256, 00:21:22.462 "data_size": 7936 00:21:22.462 } 00:21:22.462 ] 00:21:22.462 } 00:21:22.462 } 00:21:22.462 }' 00:21:22.462 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:22.462 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:22.462 pt2' 00:21:22.462 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:22.462 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:21:22.462 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:22.462 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:22.462 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.462 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:22.462 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:22.462 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:22.721 15:48:05 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:22.721 [2024-12-06 15:48:05.833161] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=537b824a-202b-4b4f-90ee-ac31b4e66438 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 537b824a-202b-4b4f-90ee-ac31b4e66438 ']' 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:22.721 [2024-12-06 15:48:05.872823] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:22.721 [2024-12-06 15:48:05.872942] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:22.721 [2024-12-06 15:48:05.873095] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:22.721 [2024-12-06 15:48:05.873193] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:22.721 [2024-12-06 15:48:05.873418] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.721 15:48:05 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:22.721 15:48:05 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.721 15:48:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:22.721 [2024-12-06 15:48:06.004669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:22.721 [2024-12-06 15:48:06.007184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:22.721 [2024-12-06 15:48:06.007264] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:21:22.721 [2024-12-06 15:48:06.007328] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:22.721 [2024-12-06 15:48:06.007346] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:22.721 [2024-12-06 15:48:06.007358] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:22.721 request: 00:21:22.721 { 00:21:22.721 "name": "raid_bdev1", 00:21:22.721 "raid_level": "raid1", 00:21:22.721 "base_bdevs": [ 00:21:22.721 "malloc1", 00:21:22.721 "malloc2" 00:21:22.721 ], 00:21:22.721 "superblock": false, 00:21:22.721 "method": "bdev_raid_create", 00:21:22.721 "req_id": 1 00:21:22.721 } 00:21:22.721 Got JSON-RPC error response 00:21:22.721 response: 00:21:22.721 { 00:21:22.980 "code": -17, 00:21:22.980 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:22.980 } 00:21:22.980 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:22.980 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:21:22.980 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:22.980 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:22.980 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:22.980 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.980 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:22.980 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.980 15:48:06 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:22.980 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.980 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:22.980 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:22.980 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:22.980 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.980 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:22.980 [2024-12-06 15:48:06.060600] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:22.980 [2024-12-06 15:48:06.060754] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:22.980 [2024-12-06 15:48:06.060780] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:22.980 [2024-12-06 15:48:06.060794] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:22.980 [2024-12-06 15:48:06.063284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:22.980 [2024-12-06 15:48:06.063333] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:22.980 [2024-12-06 15:48:06.063387] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:22.980 [2024-12-06 15:48:06.063454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:22.980 pt1 00:21:22.980 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.980 15:48:06 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:22.980 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:22.980 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:22.980 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:22.980 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:22.980 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:22.980 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:22.980 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:22.980 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:22.980 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:22.980 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.980 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.980 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.980 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:22.980 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.980 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:22.980 
"name": "raid_bdev1", 00:21:22.980 "uuid": "537b824a-202b-4b4f-90ee-ac31b4e66438", 00:21:22.980 "strip_size_kb": 0, 00:21:22.980 "state": "configuring", 00:21:22.980 "raid_level": "raid1", 00:21:22.980 "superblock": true, 00:21:22.980 "num_base_bdevs": 2, 00:21:22.980 "num_base_bdevs_discovered": 1, 00:21:22.980 "num_base_bdevs_operational": 2, 00:21:22.980 "base_bdevs_list": [ 00:21:22.980 { 00:21:22.980 "name": "pt1", 00:21:22.980 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:22.980 "is_configured": true, 00:21:22.980 "data_offset": 256, 00:21:22.980 "data_size": 7936 00:21:22.981 }, 00:21:22.981 { 00:21:22.981 "name": null, 00:21:22.981 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:22.981 "is_configured": false, 00:21:22.981 "data_offset": 256, 00:21:22.981 "data_size": 7936 00:21:22.981 } 00:21:22.981 ] 00:21:22.981 }' 00:21:22.981 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:22.981 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:23.240 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:21:23.240 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:23.240 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:23.240 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:23.240 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.240 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:23.240 [2024-12-06 15:48:06.484010] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:23.241 [2024-12-06 15:48:06.484096] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:23.241 [2024-12-06 15:48:06.484122] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:23.241 [2024-12-06 15:48:06.484138] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:23.241 [2024-12-06 15:48:06.484349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:23.241 [2024-12-06 15:48:06.484370] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:23.241 [2024-12-06 15:48:06.484427] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:23.241 [2024-12-06 15:48:06.484456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:23.241 [2024-12-06 15:48:06.484579] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:23.241 [2024-12-06 15:48:06.484595] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:23.241 [2024-12-06 15:48:06.484676] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:23.241 [2024-12-06 15:48:06.484748] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:23.241 [2024-12-06 15:48:06.484757] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:23.241 [2024-12-06 15:48:06.484831] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:23.241 pt2 00:21:23.241 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.241 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:23.241 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:23.241 15:48:06 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:23.241 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:23.241 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:23.241 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:23.241 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:23.241 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:23.241 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:23.241 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:23.241 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:23.241 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:23.241 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.241 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.241 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.241 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:23.241 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.241 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:23.241 "name": 
"raid_bdev1", 00:21:23.241 "uuid": "537b824a-202b-4b4f-90ee-ac31b4e66438", 00:21:23.241 "strip_size_kb": 0, 00:21:23.241 "state": "online", 00:21:23.241 "raid_level": "raid1", 00:21:23.241 "superblock": true, 00:21:23.241 "num_base_bdevs": 2, 00:21:23.241 "num_base_bdevs_discovered": 2, 00:21:23.241 "num_base_bdevs_operational": 2, 00:21:23.241 "base_bdevs_list": [ 00:21:23.241 { 00:21:23.241 "name": "pt1", 00:21:23.241 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:23.241 "is_configured": true, 00:21:23.241 "data_offset": 256, 00:21:23.241 "data_size": 7936 00:21:23.241 }, 00:21:23.241 { 00:21:23.241 "name": "pt2", 00:21:23.241 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:23.241 "is_configured": true, 00:21:23.241 "data_offset": 256, 00:21:23.241 "data_size": 7936 00:21:23.241 } 00:21:23.241 ] 00:21:23.241 }' 00:21:23.241 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:23.241 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:23.810 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:21:23.810 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:23.810 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:23.810 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:23.810 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:21:23.810 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:23.810 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:23.810 15:48:06 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:23.810 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.810 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:23.810 [2024-12-06 15:48:06.919732] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:23.810 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.810 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:23.810 "name": "raid_bdev1", 00:21:23.810 "aliases": [ 00:21:23.810 "537b824a-202b-4b4f-90ee-ac31b4e66438" 00:21:23.810 ], 00:21:23.810 "product_name": "Raid Volume", 00:21:23.810 "block_size": 4128, 00:21:23.810 "num_blocks": 7936, 00:21:23.810 "uuid": "537b824a-202b-4b4f-90ee-ac31b4e66438", 00:21:23.810 "md_size": 32, 00:21:23.810 "md_interleave": true, 00:21:23.810 "dif_type": 0, 00:21:23.810 "assigned_rate_limits": { 00:21:23.810 "rw_ios_per_sec": 0, 00:21:23.810 "rw_mbytes_per_sec": 0, 00:21:23.810 "r_mbytes_per_sec": 0, 00:21:23.810 "w_mbytes_per_sec": 0 00:21:23.810 }, 00:21:23.810 "claimed": false, 00:21:23.810 "zoned": false, 00:21:23.810 "supported_io_types": { 00:21:23.810 "read": true, 00:21:23.810 "write": true, 00:21:23.810 "unmap": false, 00:21:23.810 "flush": false, 00:21:23.810 "reset": true, 00:21:23.810 "nvme_admin": false, 00:21:23.810 "nvme_io": false, 00:21:23.810 "nvme_io_md": false, 00:21:23.810 "write_zeroes": true, 00:21:23.810 "zcopy": false, 00:21:23.810 "get_zone_info": false, 00:21:23.810 "zone_management": false, 00:21:23.810 "zone_append": false, 00:21:23.810 "compare": false, 00:21:23.810 "compare_and_write": false, 00:21:23.810 "abort": false, 00:21:23.810 "seek_hole": false, 00:21:23.810 "seek_data": false, 00:21:23.810 "copy": false, 00:21:23.810 "nvme_iov_md": 
false 00:21:23.810 }, 00:21:23.810 "memory_domains": [ 00:21:23.810 { 00:21:23.810 "dma_device_id": "system", 00:21:23.810 "dma_device_type": 1 00:21:23.810 }, 00:21:23.810 { 00:21:23.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:23.810 "dma_device_type": 2 00:21:23.810 }, 00:21:23.810 { 00:21:23.810 "dma_device_id": "system", 00:21:23.810 "dma_device_type": 1 00:21:23.810 }, 00:21:23.810 { 00:21:23.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:23.810 "dma_device_type": 2 00:21:23.810 } 00:21:23.810 ], 00:21:23.810 "driver_specific": { 00:21:23.811 "raid": { 00:21:23.811 "uuid": "537b824a-202b-4b4f-90ee-ac31b4e66438", 00:21:23.811 "strip_size_kb": 0, 00:21:23.811 "state": "online", 00:21:23.811 "raid_level": "raid1", 00:21:23.811 "superblock": true, 00:21:23.811 "num_base_bdevs": 2, 00:21:23.811 "num_base_bdevs_discovered": 2, 00:21:23.811 "num_base_bdevs_operational": 2, 00:21:23.811 "base_bdevs_list": [ 00:21:23.811 { 00:21:23.811 "name": "pt1", 00:21:23.811 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:23.811 "is_configured": true, 00:21:23.811 "data_offset": 256, 00:21:23.811 "data_size": 7936 00:21:23.811 }, 00:21:23.811 { 00:21:23.811 "name": "pt2", 00:21:23.811 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:23.811 "is_configured": true, 00:21:23.811 "data_offset": 256, 00:21:23.811 "data_size": 7936 00:21:23.811 } 00:21:23.811 ] 00:21:23.811 } 00:21:23.811 } 00:21:23.811 }' 00:21:23.811 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:23.811 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:23.811 pt2' 00:21:23.811 15:48:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:23.811 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:21:23.811 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:23.811 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:23.811 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:23.811 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.811 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:23.811 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.811 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:23.811 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:23.811 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:23.811 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:23.811 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.811 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:23.811 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:23.811 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.070 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:21:24.070 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:24.070 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:24.070 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.070 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:24.070 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:24.070 [2024-12-06 15:48:07.119368] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:24.070 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.070 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 537b824a-202b-4b4f-90ee-ac31b4e66438 '!=' 537b824a-202b-4b4f-90ee-ac31b4e66438 ']' 00:21:24.070 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:21:24.070 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:24.070 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:21:24.070 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:21:24.070 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.070 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:24.070 [2024-12-06 15:48:07.163088] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:24.070 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:21:24.070 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:24.070 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:24.070 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:24.070 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:24.070 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:24.070 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:24.070 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:24.070 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:24.070 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:24.070 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:24.070 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.070 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.070 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:24.070 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.070 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.070 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:21:24.070 "name": "raid_bdev1", 00:21:24.070 "uuid": "537b824a-202b-4b4f-90ee-ac31b4e66438", 00:21:24.070 "strip_size_kb": 0, 00:21:24.070 "state": "online", 00:21:24.070 "raid_level": "raid1", 00:21:24.070 "superblock": true, 00:21:24.070 "num_base_bdevs": 2, 00:21:24.070 "num_base_bdevs_discovered": 1, 00:21:24.070 "num_base_bdevs_operational": 1, 00:21:24.070 "base_bdevs_list": [ 00:21:24.070 { 00:21:24.070 "name": null, 00:21:24.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.070 "is_configured": false, 00:21:24.070 "data_offset": 0, 00:21:24.070 "data_size": 7936 00:21:24.070 }, 00:21:24.070 { 00:21:24.070 "name": "pt2", 00:21:24.070 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:24.070 "is_configured": true, 00:21:24.070 "data_offset": 256, 00:21:24.070 "data_size": 7936 00:21:24.070 } 00:21:24.070 ] 00:21:24.070 }' 00:21:24.070 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:24.070 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:24.330 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:24.330 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.330 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:24.330 [2024-12-06 15:48:07.594499] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:24.330 [2024-12-06 15:48:07.594649] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:24.330 [2024-12-06 15:48:07.594754] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:24.330 [2024-12-06 15:48:07.594812] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:21:24.330 [2024-12-06 15:48:07.594827] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:24.330 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.330 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.330 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.330 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:24.330 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:21:24.330 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.590 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:21:24.590 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:21:24.590 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:21:24.590 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:24.590 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:21:24.590 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.590 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:24.590 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.590 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:24.590 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:24.590 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:21:24.590 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:24.590 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:21:24.590 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:24.590 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.590 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:24.590 [2024-12-06 15:48:07.662407] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:24.590 [2024-12-06 15:48:07.662603] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:24.590 [2024-12-06 15:48:07.662661] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:24.590 [2024-12-06 15:48:07.662840] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:24.590 [2024-12-06 15:48:07.665442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:24.590 [2024-12-06 15:48:07.665588] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:24.590 [2024-12-06 15:48:07.665733] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:24.590 [2024-12-06 15:48:07.665875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:24.590 [2024-12-06 15:48:07.665998] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:24.590 [2024-12-06 15:48:07.666117] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:21:24.590 [2024-12-06 15:48:07.666263] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:24.590 [2024-12-06 15:48:07.666472] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:24.590 [2024-12-06 15:48:07.666525] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:21:24.591 [2024-12-06 15:48:07.666735] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:24.591 pt2 00:21:24.591 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.591 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:24.591 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:24.591 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:24.591 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:24.591 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:24.591 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:24.591 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:24.591 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:24.591 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:24.591 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:24.591 15:48:07 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.591 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.591 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:24.591 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.591 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.591 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:24.591 "name": "raid_bdev1", 00:21:24.591 "uuid": "537b824a-202b-4b4f-90ee-ac31b4e66438", 00:21:24.591 "strip_size_kb": 0, 00:21:24.591 "state": "online", 00:21:24.591 "raid_level": "raid1", 00:21:24.591 "superblock": true, 00:21:24.591 "num_base_bdevs": 2, 00:21:24.591 "num_base_bdevs_discovered": 1, 00:21:24.591 "num_base_bdevs_operational": 1, 00:21:24.591 "base_bdevs_list": [ 00:21:24.591 { 00:21:24.591 "name": null, 00:21:24.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.591 "is_configured": false, 00:21:24.591 "data_offset": 256, 00:21:24.591 "data_size": 7936 00:21:24.591 }, 00:21:24.591 { 00:21:24.591 "name": "pt2", 00:21:24.591 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:24.591 "is_configured": true, 00:21:24.591 "data_offset": 256, 00:21:24.591 "data_size": 7936 00:21:24.591 } 00:21:24.591 ] 00:21:24.591 }' 00:21:24.591 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:24.591 15:48:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:24.850 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:24.850 15:48:08 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.850 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:24.850 [2024-12-06 15:48:08.086199] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:24.850 [2024-12-06 15:48:08.086233] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:24.850 [2024-12-06 15:48:08.086320] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:24.850 [2024-12-06 15:48:08.086378] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:24.850 [2024-12-06 15:48:08.086391] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:24.850 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.850 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.850 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.850 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:24.850 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:21:24.850 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.850 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:21:24.850 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:21:24.850 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:21:24.850 15:48:08 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:24.850 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.850 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:25.163 [2024-12-06 15:48:08.146136] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:25.163 [2024-12-06 15:48:08.146204] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:25.163 [2024-12-06 15:48:08.146230] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:25.163 [2024-12-06 15:48:08.146242] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:25.163 [2024-12-06 15:48:08.148812] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:25.163 [2024-12-06 15:48:08.148850] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:25.163 [2024-12-06 15:48:08.148914] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:25.163 [2024-12-06 15:48:08.148967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:25.163 [2024-12-06 15:48:08.149080] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:25.163 [2024-12-06 15:48:08.149092] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:25.163 [2024-12-06 15:48:08.149113] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:21:25.163 [2024-12-06 15:48:08.149167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:25.163 [2024-12-06 15:48:08.149246] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:21:25.163 [2024-12-06 15:48:08.149256] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:25.163 [2024-12-06 15:48:08.149336] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:25.163 [2024-12-06 15:48:08.149398] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:21:25.163 [2024-12-06 15:48:08.149410] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:21:25.163 [2024-12-06 15:48:08.149482] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:25.163 pt1 00:21:25.163 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.163 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:21:25.163 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:25.163 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:25.163 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:25.163 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:25.163 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:25.163 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:25.163 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:25.163 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:25.163 15:48:08 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:25.163 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:25.163 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.163 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.163 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.163 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:25.163 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.163 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:25.163 "name": "raid_bdev1", 00:21:25.163 "uuid": "537b824a-202b-4b4f-90ee-ac31b4e66438", 00:21:25.163 "strip_size_kb": 0, 00:21:25.163 "state": "online", 00:21:25.163 "raid_level": "raid1", 00:21:25.163 "superblock": true, 00:21:25.163 "num_base_bdevs": 2, 00:21:25.163 "num_base_bdevs_discovered": 1, 00:21:25.163 "num_base_bdevs_operational": 1, 00:21:25.163 "base_bdevs_list": [ 00:21:25.163 { 00:21:25.163 "name": null, 00:21:25.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.163 "is_configured": false, 00:21:25.163 "data_offset": 256, 00:21:25.163 "data_size": 7936 00:21:25.163 }, 00:21:25.163 { 00:21:25.163 "name": "pt2", 00:21:25.163 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:25.163 "is_configured": true, 00:21:25.163 "data_offset": 256, 00:21:25.163 "data_size": 7936 00:21:25.163 } 00:21:25.163 ] 00:21:25.163 }' 00:21:25.163 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:25.163 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:21:25.424 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:21:25.424 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:25.424 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.424 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:25.424 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.424 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:21:25.424 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:21:25.424 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:25.424 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.424 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:25.425 [2024-12-06 15:48:08.573825] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:25.425 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.425 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 537b824a-202b-4b4f-90ee-ac31b4e66438 '!=' 537b824a-202b-4b4f-90ee-ac31b4e66438 ']' 00:21:25.425 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88697 00:21:25.425 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88697 ']' 00:21:25.425 15:48:08 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88697 00:21:25.425 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:21:25.425 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:25.425 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88697 00:21:25.425 killing process with pid 88697 00:21:25.425 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:25.425 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:25.425 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88697' 00:21:25.425 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 88697 00:21:25.425 [2024-12-06 15:48:08.639288] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:25.425 [2024-12-06 15:48:08.639382] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:25.425 15:48:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 88697 00:21:25.425 [2024-12-06 15:48:08.639436] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:25.425 [2024-12-06 15:48:08.639455] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:21:25.683 [2024-12-06 15:48:08.861976] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:27.061 15:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:21:27.061 00:21:27.061 real 0m5.931s 00:21:27.061 user 0m8.713s 00:21:27.061 sys 0m1.296s 00:21:27.061 
15:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:27.061 15:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:27.061 ************************************ 00:21:27.061 END TEST raid_superblock_test_md_interleaved 00:21:27.061 ************************************ 00:21:27.061 15:48:10 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:21:27.061 15:48:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:21:27.061 15:48:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:27.061 15:48:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:27.061 ************************************ 00:21:27.061 START TEST raid_rebuild_test_sb_md_interleaved 00:21:27.061 ************************************ 00:21:27.061 15:48:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:21:27.061 15:48:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:21:27.061 15:48:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:21:27.061 15:48:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:21:27.061 15:48:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:27.061 15:48:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:21:27.061 15:48:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:27.061 15:48:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:27.061 15:48:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:27.061 15:48:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:27.061 15:48:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:27.061 15:48:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:27.062 15:48:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:27.062 15:48:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:27.062 15:48:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:27.062 15:48:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:27.062 15:48:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:27.062 15:48:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:27.062 15:48:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:27.062 15:48:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:27.062 15:48:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:27.062 15:48:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:21:27.062 15:48:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:21:27.062 15:48:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:21:27.062 15:48:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:21:27.062 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:21:27.062 15:48:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89021 00:21:27.062 15:48:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89021 00:21:27.062 15:48:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:27.062 15:48:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89021 ']' 00:21:27.062 15:48:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.062 15:48:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:27.062 15:48:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.062 15:48:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:27.062 15:48:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:27.062 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:27.062 Zero copy mechanism will not be used. 00:21:27.062 [2024-12-06 15:48:10.256302] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:21:27.062 [2024-12-06 15:48:10.256447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89021 ] 00:21:27.321 [2024-12-06 15:48:10.438974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.321 [2024-12-06 15:48:10.569411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.581 [2024-12-06 15:48:10.806264] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:27.581 [2024-12-06 15:48:10.806608] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:27.839 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:27.839 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:21:27.839 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:27.839 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:21:27.839 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.839 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:27.839 BaseBdev1_malloc 00:21:27.839 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.839 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:27.839 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.839 15:48:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:28.099 [2024-12-06 15:48:11.132777] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:28.099 [2024-12-06 15:48:11.132973] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:28.099 [2024-12-06 15:48:11.133040] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:28.099 [2024-12-06 15:48:11.133132] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:28.099 [2024-12-06 15:48:11.135599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:28.099 [2024-12-06 15:48:11.135746] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:28.099 BaseBdev1 00:21:28.099 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.099 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:28.099 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:21:28.099 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.099 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:28.099 BaseBdev2_malloc 00:21:28.099 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.099 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:28.099 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.099 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:21:28.099 [2024-12-06 15:48:11.194492] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:28.099 [2024-12-06 15:48:11.194686] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:28.099 [2024-12-06 15:48:11.194717] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:28.099 [2024-12-06 15:48:11.194734] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:28.099 [2024-12-06 15:48:11.197118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:28.099 [2024-12-06 15:48:11.197161] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:28.099 BaseBdev2 00:21:28.099 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.099 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:21:28.099 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.099 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:28.099 spare_malloc 00:21:28.099 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.099 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:28.099 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.099 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:28.099 spare_delay 00:21:28.099 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.099 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:28.099 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.099 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:28.099 [2024-12-06 15:48:11.280267] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:28.099 [2024-12-06 15:48:11.280455] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:28.099 [2024-12-06 15:48:11.280521] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:28.099 [2024-12-06 15:48:11.280616] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:28.099 [2024-12-06 15:48:11.283034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:28.099 [2024-12-06 15:48:11.283175] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:28.099 spare 00:21:28.099 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.099 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:21:28.099 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.100 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:28.100 [2024-12-06 15:48:11.292312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:28.100 [2024-12-06 15:48:11.294792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:28.100 [2024-12-06 
15:48:11.295118] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:28.100 [2024-12-06 15:48:11.295170] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:28.100 [2024-12-06 15:48:11.295379] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:28.100 [2024-12-06 15:48:11.295467] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:28.100 [2024-12-06 15:48:11.295477] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:28.100 [2024-12-06 15:48:11.295574] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:28.100 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.100 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:28.100 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:28.100 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:28.100 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:28.100 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:28.100 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:28.100 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:28.100 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:28.100 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:21:28.100 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:28.100 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.100 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.100 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.100 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:28.100 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.100 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:28.100 "name": "raid_bdev1", 00:21:28.100 "uuid": "426d0f54-74b8-49ac-9106-421fa1a401cd", 00:21:28.100 "strip_size_kb": 0, 00:21:28.100 "state": "online", 00:21:28.100 "raid_level": "raid1", 00:21:28.100 "superblock": true, 00:21:28.100 "num_base_bdevs": 2, 00:21:28.100 "num_base_bdevs_discovered": 2, 00:21:28.100 "num_base_bdevs_operational": 2, 00:21:28.100 "base_bdevs_list": [ 00:21:28.100 { 00:21:28.100 "name": "BaseBdev1", 00:21:28.100 "uuid": "c1b7590a-79e0-52c5-bc20-69ecf747ac17", 00:21:28.100 "is_configured": true, 00:21:28.100 "data_offset": 256, 00:21:28.100 "data_size": 7936 00:21:28.100 }, 00:21:28.100 { 00:21:28.100 "name": "BaseBdev2", 00:21:28.100 "uuid": "eeb08e04-d0bd-5a38-a0d4-d4c53858f4fc", 00:21:28.100 "is_configured": true, 00:21:28.100 "data_offset": 256, 00:21:28.100 "data_size": 7936 00:21:28.100 } 00:21:28.100 ] 00:21:28.100 }' 00:21:28.100 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:28.100 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:28.669 15:48:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:28.669 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:28.669 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.669 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:28.669 [2024-12-06 15:48:11.711988] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:28.669 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.669 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:21:28.669 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:28.669 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.669 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.669 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:28.669 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.669 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:21:28.669 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:21:28.669 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:21:28.669 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:28.669 15:48:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.669 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:28.669 [2024-12-06 15:48:11.787626] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:28.669 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.669 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:28.669 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:28.669 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:28.669 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:28.669 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:28.669 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:28.670 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:28.670 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:28.670 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:28.670 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:28.670 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.670 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.670 15:48:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.670 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:28.670 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.670 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:28.670 "name": "raid_bdev1", 00:21:28.670 "uuid": "426d0f54-74b8-49ac-9106-421fa1a401cd", 00:21:28.670 "strip_size_kb": 0, 00:21:28.670 "state": "online", 00:21:28.670 "raid_level": "raid1", 00:21:28.670 "superblock": true, 00:21:28.670 "num_base_bdevs": 2, 00:21:28.670 "num_base_bdevs_discovered": 1, 00:21:28.670 "num_base_bdevs_operational": 1, 00:21:28.670 "base_bdevs_list": [ 00:21:28.670 { 00:21:28.670 "name": null, 00:21:28.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.670 "is_configured": false, 00:21:28.670 "data_offset": 0, 00:21:28.670 "data_size": 7936 00:21:28.670 }, 00:21:28.670 { 00:21:28.670 "name": "BaseBdev2", 00:21:28.670 "uuid": "eeb08e04-d0bd-5a38-a0d4-d4c53858f4fc", 00:21:28.670 "is_configured": true, 00:21:28.670 "data_offset": 256, 00:21:28.670 "data_size": 7936 00:21:28.670 } 00:21:28.670 ] 00:21:28.670 }' 00:21:28.670 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:28.670 15:48:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:28.928 15:48:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:28.928 15:48:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.928 15:48:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:28.928 [2024-12-06 15:48:12.179137] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:28.928 [2024-12-06 15:48:12.198587] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:28.928 15:48:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.928 15:48:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:28.928 [2024-12-06 15:48:12.200999] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:30.331 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:30.331 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:30.331 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:30.331 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:30.331 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:30.331 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.331 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.331 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.331 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:30.331 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.331 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:30.331 "name": "raid_bdev1", 00:21:30.331 
"uuid": "426d0f54-74b8-49ac-9106-421fa1a401cd", 00:21:30.331 "strip_size_kb": 0, 00:21:30.331 "state": "online", 00:21:30.331 "raid_level": "raid1", 00:21:30.331 "superblock": true, 00:21:30.331 "num_base_bdevs": 2, 00:21:30.331 "num_base_bdevs_discovered": 2, 00:21:30.331 "num_base_bdevs_operational": 2, 00:21:30.331 "process": { 00:21:30.331 "type": "rebuild", 00:21:30.331 "target": "spare", 00:21:30.331 "progress": { 00:21:30.331 "blocks": 2560, 00:21:30.331 "percent": 32 00:21:30.331 } 00:21:30.331 }, 00:21:30.331 "base_bdevs_list": [ 00:21:30.331 { 00:21:30.331 "name": "spare", 00:21:30.331 "uuid": "8c28f6e7-369c-5e96-a92a-410c156e5bab", 00:21:30.332 "is_configured": true, 00:21:30.332 "data_offset": 256, 00:21:30.332 "data_size": 7936 00:21:30.332 }, 00:21:30.332 { 00:21:30.332 "name": "BaseBdev2", 00:21:30.332 "uuid": "eeb08e04-d0bd-5a38-a0d4-d4c53858f4fc", 00:21:30.332 "is_configured": true, 00:21:30.332 "data_offset": 256, 00:21:30.332 "data_size": 7936 00:21:30.332 } 00:21:30.332 ] 00:21:30.332 }' 00:21:30.332 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:30.332 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:30.332 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:30.332 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:30.332 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:30.332 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.332 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:30.332 [2024-12-06 15:48:13.344237] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:21:30.332 [2024-12-06 15:48:13.409734] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:30.332 [2024-12-06 15:48:13.409947] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:30.332 [2024-12-06 15:48:13.410065] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:30.332 [2024-12-06 15:48:13.410094] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:30.332 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.332 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:30.332 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:30.332 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:30.332 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:30.332 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:30.332 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:30.332 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:30.332 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:30.332 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:30.332 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:30.332 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.332 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.332 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.332 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:30.332 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.332 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:30.332 "name": "raid_bdev1", 00:21:30.332 "uuid": "426d0f54-74b8-49ac-9106-421fa1a401cd", 00:21:30.332 "strip_size_kb": 0, 00:21:30.332 "state": "online", 00:21:30.332 "raid_level": "raid1", 00:21:30.332 "superblock": true, 00:21:30.332 "num_base_bdevs": 2, 00:21:30.332 "num_base_bdevs_discovered": 1, 00:21:30.332 "num_base_bdevs_operational": 1, 00:21:30.332 "base_bdevs_list": [ 00:21:30.332 { 00:21:30.332 "name": null, 00:21:30.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.332 "is_configured": false, 00:21:30.332 "data_offset": 0, 00:21:30.332 "data_size": 7936 00:21:30.332 }, 00:21:30.332 { 00:21:30.332 "name": "BaseBdev2", 00:21:30.332 "uuid": "eeb08e04-d0bd-5a38-a0d4-d4c53858f4fc", 00:21:30.332 "is_configured": true, 00:21:30.332 "data_offset": 256, 00:21:30.332 "data_size": 7936 00:21:30.332 } 00:21:30.332 ] 00:21:30.332 }' 00:21:30.332 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:30.332 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:30.590 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:30.590 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:21:30.590 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:30.590 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:30.590 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:30.590 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.590 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.591 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:30.591 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.591 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.850 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:30.850 "name": "raid_bdev1", 00:21:30.850 "uuid": "426d0f54-74b8-49ac-9106-421fa1a401cd", 00:21:30.850 "strip_size_kb": 0, 00:21:30.850 "state": "online", 00:21:30.850 "raid_level": "raid1", 00:21:30.850 "superblock": true, 00:21:30.850 "num_base_bdevs": 2, 00:21:30.850 "num_base_bdevs_discovered": 1, 00:21:30.850 "num_base_bdevs_operational": 1, 00:21:30.850 "base_bdevs_list": [ 00:21:30.850 { 00:21:30.850 "name": null, 00:21:30.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.850 "is_configured": false, 00:21:30.850 "data_offset": 0, 00:21:30.850 "data_size": 7936 00:21:30.850 }, 00:21:30.850 { 00:21:30.850 "name": "BaseBdev2", 00:21:30.850 "uuid": "eeb08e04-d0bd-5a38-a0d4-d4c53858f4fc", 00:21:30.850 "is_configured": true, 00:21:30.850 "data_offset": 256, 00:21:30.850 "data_size": 7936 00:21:30.850 } 00:21:30.850 ] 00:21:30.850 }' 
00:21:30.850 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:30.850 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:30.850 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:30.850 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:30.850 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:30.850 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.850 15:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:30.850 [2024-12-06 15:48:13.995857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:30.850 [2024-12-06 15:48:14.014459] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:30.850 15:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.850 15:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:30.850 [2024-12-06 15:48:14.017016] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:31.789 15:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:31.789 15:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:31.789 15:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:31.789 15:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:21:31.789 15:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:31.790 15:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.790 15:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:31.790 15:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.790 15:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:31.790 15:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.790 15:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:31.790 "name": "raid_bdev1", 00:21:31.790 "uuid": "426d0f54-74b8-49ac-9106-421fa1a401cd", 00:21:31.790 "strip_size_kb": 0, 00:21:31.790 "state": "online", 00:21:31.790 "raid_level": "raid1", 00:21:31.790 "superblock": true, 00:21:31.790 "num_base_bdevs": 2, 00:21:31.790 "num_base_bdevs_discovered": 2, 00:21:31.790 "num_base_bdevs_operational": 2, 00:21:31.790 "process": { 00:21:31.790 "type": "rebuild", 00:21:31.790 "target": "spare", 00:21:31.790 "progress": { 00:21:31.790 "blocks": 2560, 00:21:31.790 "percent": 32 00:21:31.790 } 00:21:31.790 }, 00:21:31.790 "base_bdevs_list": [ 00:21:31.790 { 00:21:31.790 "name": "spare", 00:21:31.790 "uuid": "8c28f6e7-369c-5e96-a92a-410c156e5bab", 00:21:31.790 "is_configured": true, 00:21:31.790 "data_offset": 256, 00:21:31.790 "data_size": 7936 00:21:31.790 }, 00:21:31.790 { 00:21:31.790 "name": "BaseBdev2", 00:21:31.790 "uuid": "eeb08e04-d0bd-5a38-a0d4-d4c53858f4fc", 00:21:31.790 "is_configured": true, 00:21:31.790 "data_offset": 256, 00:21:31.790 "data_size": 7936 00:21:31.790 } 00:21:31.790 ] 00:21:31.790 }' 00:21:31.790 15:48:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:32.048 15:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:32.049 15:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:32.049 15:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:32.049 15:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:21:32.049 15:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:21:32.049 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:21:32.049 15:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:21:32.049 15:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:21:32.049 15:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:21:32.049 15:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=743 00:21:32.049 15:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:32.049 15:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:32.049 15:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:32.049 15:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:32.049 15:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:32.049 15:48:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:32.049 15:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.049 15:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.049 15:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.049 15:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:32.049 15:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.049 15:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:32.049 "name": "raid_bdev1", 00:21:32.049 "uuid": "426d0f54-74b8-49ac-9106-421fa1a401cd", 00:21:32.049 "strip_size_kb": 0, 00:21:32.049 "state": "online", 00:21:32.049 "raid_level": "raid1", 00:21:32.049 "superblock": true, 00:21:32.049 "num_base_bdevs": 2, 00:21:32.049 "num_base_bdevs_discovered": 2, 00:21:32.049 "num_base_bdevs_operational": 2, 00:21:32.049 "process": { 00:21:32.049 "type": "rebuild", 00:21:32.049 "target": "spare", 00:21:32.049 "progress": { 00:21:32.049 "blocks": 2816, 00:21:32.049 "percent": 35 00:21:32.049 } 00:21:32.049 }, 00:21:32.049 "base_bdevs_list": [ 00:21:32.049 { 00:21:32.049 "name": "spare", 00:21:32.049 "uuid": "8c28f6e7-369c-5e96-a92a-410c156e5bab", 00:21:32.049 "is_configured": true, 00:21:32.049 "data_offset": 256, 00:21:32.049 "data_size": 7936 00:21:32.049 }, 00:21:32.049 { 00:21:32.049 "name": "BaseBdev2", 00:21:32.049 "uuid": "eeb08e04-d0bd-5a38-a0d4-d4c53858f4fc", 00:21:32.049 "is_configured": true, 00:21:32.049 "data_offset": 256, 00:21:32.049 "data_size": 7936 00:21:32.049 } 00:21:32.049 ] 00:21:32.049 }' 00:21:32.049 15:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:32.049 15:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:32.049 15:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:32.049 15:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:32.049 15:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:33.424 15:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:33.424 15:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:33.424 15:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:33.424 15:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:33.424 15:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:33.424 15:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:33.424 15:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.424 15:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.424 15:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.424 15:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:33.424 15:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.424 15:48:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:33.424 "name": "raid_bdev1", 00:21:33.424 "uuid": "426d0f54-74b8-49ac-9106-421fa1a401cd", 00:21:33.424 "strip_size_kb": 0, 00:21:33.424 "state": "online", 00:21:33.424 "raid_level": "raid1", 00:21:33.424 "superblock": true, 00:21:33.424 "num_base_bdevs": 2, 00:21:33.424 "num_base_bdevs_discovered": 2, 00:21:33.424 "num_base_bdevs_operational": 2, 00:21:33.424 "process": { 00:21:33.424 "type": "rebuild", 00:21:33.424 "target": "spare", 00:21:33.424 "progress": { 00:21:33.424 "blocks": 5632, 00:21:33.424 "percent": 70 00:21:33.424 } 00:21:33.424 }, 00:21:33.424 "base_bdevs_list": [ 00:21:33.424 { 00:21:33.424 "name": "spare", 00:21:33.424 "uuid": "8c28f6e7-369c-5e96-a92a-410c156e5bab", 00:21:33.424 "is_configured": true, 00:21:33.424 "data_offset": 256, 00:21:33.424 "data_size": 7936 00:21:33.424 }, 00:21:33.424 { 00:21:33.424 "name": "BaseBdev2", 00:21:33.424 "uuid": "eeb08e04-d0bd-5a38-a0d4-d4c53858f4fc", 00:21:33.424 "is_configured": true, 00:21:33.424 "data_offset": 256, 00:21:33.424 "data_size": 7936 00:21:33.424 } 00:21:33.424 ] 00:21:33.424 }' 00:21:33.424 15:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:33.424 15:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:33.424 15:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:33.424 15:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:33.424 15:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:33.992 [2024-12-06 15:48:17.139294] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:33.992 [2024-12-06 15:48:17.139568] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:33.992 [2024-12-06 15:48:17.139707] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:34.251 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:34.251 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:34.251 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:34.251 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:34.251 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:34.251 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:34.251 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.251 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.251 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:34.251 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:34.251 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.251 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:34.251 "name": "raid_bdev1", 00:21:34.251 "uuid": "426d0f54-74b8-49ac-9106-421fa1a401cd", 00:21:34.251 "strip_size_kb": 0, 00:21:34.251 "state": "online", 00:21:34.251 "raid_level": "raid1", 00:21:34.251 "superblock": true, 00:21:34.251 "num_base_bdevs": 2, 00:21:34.251 
"num_base_bdevs_discovered": 2, 00:21:34.251 "num_base_bdevs_operational": 2, 00:21:34.251 "base_bdevs_list": [ 00:21:34.251 { 00:21:34.251 "name": "spare", 00:21:34.251 "uuid": "8c28f6e7-369c-5e96-a92a-410c156e5bab", 00:21:34.251 "is_configured": true, 00:21:34.251 "data_offset": 256, 00:21:34.251 "data_size": 7936 00:21:34.251 }, 00:21:34.251 { 00:21:34.251 "name": "BaseBdev2", 00:21:34.251 "uuid": "eeb08e04-d0bd-5a38-a0d4-d4c53858f4fc", 00:21:34.251 "is_configured": true, 00:21:34.251 "data_offset": 256, 00:21:34.251 "data_size": 7936 00:21:34.251 } 00:21:34.251 ] 00:21:34.252 }' 00:21:34.252 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:34.252 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:34.252 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:34.252 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:34.252 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:21:34.252 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:34.252 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:34.252 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:34.252 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:34.252 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:34.252 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.252 15:48:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:34.252 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.252 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:34.511 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.511 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:34.511 "name": "raid_bdev1", 00:21:34.511 "uuid": "426d0f54-74b8-49ac-9106-421fa1a401cd", 00:21:34.511 "strip_size_kb": 0, 00:21:34.511 "state": "online", 00:21:34.511 "raid_level": "raid1", 00:21:34.511 "superblock": true, 00:21:34.511 "num_base_bdevs": 2, 00:21:34.511 "num_base_bdevs_discovered": 2, 00:21:34.511 "num_base_bdevs_operational": 2, 00:21:34.511 "base_bdevs_list": [ 00:21:34.511 { 00:21:34.511 "name": "spare", 00:21:34.511 "uuid": "8c28f6e7-369c-5e96-a92a-410c156e5bab", 00:21:34.511 "is_configured": true, 00:21:34.511 "data_offset": 256, 00:21:34.511 "data_size": 7936 00:21:34.511 }, 00:21:34.511 { 00:21:34.511 "name": "BaseBdev2", 00:21:34.511 "uuid": "eeb08e04-d0bd-5a38-a0d4-d4c53858f4fc", 00:21:34.511 "is_configured": true, 00:21:34.511 "data_offset": 256, 00:21:34.511 "data_size": 7936 00:21:34.511 } 00:21:34.511 ] 00:21:34.511 }' 00:21:34.511 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:34.511 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:34.511 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:34.511 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:34.511 15:48:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:34.511 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:34.511 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:34.511 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:34.512 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:34.512 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:34.512 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:34.512 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:34.512 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:34.512 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:34.512 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.512 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.512 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:34.512 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:34.512 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.512 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:34.512 "name": 
"raid_bdev1", 00:21:34.512 "uuid": "426d0f54-74b8-49ac-9106-421fa1a401cd", 00:21:34.512 "strip_size_kb": 0, 00:21:34.512 "state": "online", 00:21:34.512 "raid_level": "raid1", 00:21:34.512 "superblock": true, 00:21:34.512 "num_base_bdevs": 2, 00:21:34.512 "num_base_bdevs_discovered": 2, 00:21:34.512 "num_base_bdevs_operational": 2, 00:21:34.512 "base_bdevs_list": [ 00:21:34.512 { 00:21:34.512 "name": "spare", 00:21:34.512 "uuid": "8c28f6e7-369c-5e96-a92a-410c156e5bab", 00:21:34.512 "is_configured": true, 00:21:34.512 "data_offset": 256, 00:21:34.512 "data_size": 7936 00:21:34.512 }, 00:21:34.512 { 00:21:34.512 "name": "BaseBdev2", 00:21:34.512 "uuid": "eeb08e04-d0bd-5a38-a0d4-d4c53858f4fc", 00:21:34.512 "is_configured": true, 00:21:34.512 "data_offset": 256, 00:21:34.512 "data_size": 7936 00:21:34.512 } 00:21:34.512 ] 00:21:34.512 }' 00:21:34.512 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:34.512 15:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:34.771 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:34.771 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.771 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:34.771 [2024-12-06 15:48:18.060266] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:34.771 [2024-12-06 15:48:18.060429] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:34.771 [2024-12-06 15:48:18.060657] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:34.771 [2024-12-06 15:48:18.060749] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:34.771 [2024-12-06 
15:48:18.060766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:35.032 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.032 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:35.032 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.032 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:35.032 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:21:35.032 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.032 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:35.032 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:21:35.032 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:35.032 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:35.032 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.032 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:35.032 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.032 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:35.032 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.032 15:48:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:35.032 [2024-12-06 15:48:18.124133] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:35.032 [2024-12-06 15:48:18.124310] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:35.032 [2024-12-06 15:48:18.124374] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:21:35.032 [2024-12-06 15:48:18.124389] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:35.032 [2024-12-06 15:48:18.126973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:35.032 [2024-12-06 15:48:18.127011] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:35.032 [2024-12-06 15:48:18.127078] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:35.032 [2024-12-06 15:48:18.127132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:35.032 [2024-12-06 15:48:18.127259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:35.032 spare 00:21:35.032 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.032 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:35.032 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.032 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:35.032 [2024-12-06 15:48:18.227187] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:35.032 [2024-12-06 15:48:18.227325] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:35.032 [2024-12-06 15:48:18.227494] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:21:35.032 [2024-12-06 15:48:18.227698] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:35.032 [2024-12-06 15:48:18.227786] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:35.032 [2024-12-06 15:48:18.228008] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:35.032 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.032 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:35.032 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:35.032 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:35.032 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:35.032 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:35.032 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:35.032 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:35.032 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:35.032 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:35.032 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:35.032 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:35.032 15:48:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.032 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.032 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:35.032 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.032 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:35.032 "name": "raid_bdev1", 00:21:35.032 "uuid": "426d0f54-74b8-49ac-9106-421fa1a401cd", 00:21:35.032 "strip_size_kb": 0, 00:21:35.032 "state": "online", 00:21:35.032 "raid_level": "raid1", 00:21:35.032 "superblock": true, 00:21:35.032 "num_base_bdevs": 2, 00:21:35.032 "num_base_bdevs_discovered": 2, 00:21:35.032 "num_base_bdevs_operational": 2, 00:21:35.032 "base_bdevs_list": [ 00:21:35.032 { 00:21:35.032 "name": "spare", 00:21:35.032 "uuid": "8c28f6e7-369c-5e96-a92a-410c156e5bab", 00:21:35.032 "is_configured": true, 00:21:35.032 "data_offset": 256, 00:21:35.032 "data_size": 7936 00:21:35.032 }, 00:21:35.032 { 00:21:35.032 "name": "BaseBdev2", 00:21:35.032 "uuid": "eeb08e04-d0bd-5a38-a0d4-d4c53858f4fc", 00:21:35.032 "is_configured": true, 00:21:35.032 "data_offset": 256, 00:21:35.033 "data_size": 7936 00:21:35.033 } 00:21:35.033 ] 00:21:35.033 }' 00:21:35.033 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:35.033 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:35.605 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:35.605 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:35.605 15:48:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:35.605 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:35.605 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:35.605 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:35.605 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.605 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.605 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:35.605 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.605 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:35.605 "name": "raid_bdev1", 00:21:35.605 "uuid": "426d0f54-74b8-49ac-9106-421fa1a401cd", 00:21:35.605 "strip_size_kb": 0, 00:21:35.605 "state": "online", 00:21:35.605 "raid_level": "raid1", 00:21:35.605 "superblock": true, 00:21:35.605 "num_base_bdevs": 2, 00:21:35.605 "num_base_bdevs_discovered": 2, 00:21:35.605 "num_base_bdevs_operational": 2, 00:21:35.605 "base_bdevs_list": [ 00:21:35.605 { 00:21:35.605 "name": "spare", 00:21:35.605 "uuid": "8c28f6e7-369c-5e96-a92a-410c156e5bab", 00:21:35.605 "is_configured": true, 00:21:35.605 "data_offset": 256, 00:21:35.605 "data_size": 7936 00:21:35.605 }, 00:21:35.605 { 00:21:35.605 "name": "BaseBdev2", 00:21:35.605 "uuid": "eeb08e04-d0bd-5a38-a0d4-d4c53858f4fc", 00:21:35.605 "is_configured": true, 00:21:35.605 "data_offset": 256, 00:21:35.605 "data_size": 7936 00:21:35.605 } 00:21:35.605 ] 00:21:35.605 }' 00:21:35.605 15:48:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:35.605 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:35.605 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:35.605 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:35.605 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:35.605 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:35.605 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.605 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:35.605 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.605 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:35.605 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:35.605 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.605 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:35.605 [2024-12-06 15:48:18.843254] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:35.605 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.605 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:35.605 15:48:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:35.605 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:35.605 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:35.605 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:35.605 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:35.605 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:35.605 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:35.606 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:35.606 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:35.606 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:35.606 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.606 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:35.606 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.606 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.606 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:35.606 "name": "raid_bdev1", 00:21:35.606 "uuid": "426d0f54-74b8-49ac-9106-421fa1a401cd", 00:21:35.606 "strip_size_kb": 0, 00:21:35.606 "state": "online", 00:21:35.606 
"raid_level": "raid1", 00:21:35.606 "superblock": true, 00:21:35.606 "num_base_bdevs": 2, 00:21:35.606 "num_base_bdevs_discovered": 1, 00:21:35.606 "num_base_bdevs_operational": 1, 00:21:35.606 "base_bdevs_list": [ 00:21:35.606 { 00:21:35.606 "name": null, 00:21:35.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.606 "is_configured": false, 00:21:35.606 "data_offset": 0, 00:21:35.606 "data_size": 7936 00:21:35.606 }, 00:21:35.606 { 00:21:35.606 "name": "BaseBdev2", 00:21:35.606 "uuid": "eeb08e04-d0bd-5a38-a0d4-d4c53858f4fc", 00:21:35.606 "is_configured": true, 00:21:35.606 "data_offset": 256, 00:21:35.606 "data_size": 7936 00:21:35.606 } 00:21:35.606 ] 00:21:35.606 }' 00:21:35.606 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:35.866 15:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:36.125 15:48:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:36.125 15:48:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.125 15:48:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:36.125 [2024-12-06 15:48:19.282652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:36.125 [2024-12-06 15:48:19.282898] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:36.125 [2024-12-06 15:48:19.282920] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:36.125 [2024-12-06 15:48:19.282972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:36.125 [2024-12-06 15:48:19.300894] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:21:36.125 15:48:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.125 15:48:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:36.125 [2024-12-06 15:48:19.303250] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:37.062 15:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:37.062 15:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:37.062 15:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:37.062 15:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:37.062 15:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:37.062 15:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.062 15:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.063 15:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:37.063 15:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:37.063 15:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.322 15:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:21:37.322 "name": "raid_bdev1", 00:21:37.322 "uuid": "426d0f54-74b8-49ac-9106-421fa1a401cd", 00:21:37.322 "strip_size_kb": 0, 00:21:37.322 "state": "online", 00:21:37.322 "raid_level": "raid1", 00:21:37.322 "superblock": true, 00:21:37.322 "num_base_bdevs": 2, 00:21:37.322 "num_base_bdevs_discovered": 2, 00:21:37.322 "num_base_bdevs_operational": 2, 00:21:37.322 "process": { 00:21:37.322 "type": "rebuild", 00:21:37.322 "target": "spare", 00:21:37.322 "progress": { 00:21:37.322 "blocks": 2560, 00:21:37.322 "percent": 32 00:21:37.322 } 00:21:37.322 }, 00:21:37.322 "base_bdevs_list": [ 00:21:37.322 { 00:21:37.322 "name": "spare", 00:21:37.322 "uuid": "8c28f6e7-369c-5e96-a92a-410c156e5bab", 00:21:37.322 "is_configured": true, 00:21:37.322 "data_offset": 256, 00:21:37.322 "data_size": 7936 00:21:37.322 }, 00:21:37.322 { 00:21:37.322 "name": "BaseBdev2", 00:21:37.322 "uuid": "eeb08e04-d0bd-5a38-a0d4-d4c53858f4fc", 00:21:37.322 "is_configured": true, 00:21:37.322 "data_offset": 256, 00:21:37.322 "data_size": 7936 00:21:37.322 } 00:21:37.322 ] 00:21:37.322 }' 00:21:37.322 15:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:37.322 15:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:37.322 15:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:37.322 15:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:37.322 15:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:37.322 15:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.322 15:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:37.322 [2024-12-06 15:48:20.426831] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:37.323 [2024-12-06 15:48:20.512297] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:37.323 [2024-12-06 15:48:20.512561] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:37.323 [2024-12-06 15:48:20.512663] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:37.323 [2024-12-06 15:48:20.512686] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:37.323 15:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.323 15:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:37.323 15:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:37.323 15:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:37.323 15:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:37.323 15:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:37.323 15:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:37.323 15:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:37.323 15:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:37.323 15:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:37.323 15:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:37.323 15:48:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.323 15:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.323 15:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:37.323 15:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:37.323 15:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.323 15:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:37.323 "name": "raid_bdev1", 00:21:37.323 "uuid": "426d0f54-74b8-49ac-9106-421fa1a401cd", 00:21:37.323 "strip_size_kb": 0, 00:21:37.323 "state": "online", 00:21:37.323 "raid_level": "raid1", 00:21:37.323 "superblock": true, 00:21:37.323 "num_base_bdevs": 2, 00:21:37.323 "num_base_bdevs_discovered": 1, 00:21:37.323 "num_base_bdevs_operational": 1, 00:21:37.323 "base_bdevs_list": [ 00:21:37.323 { 00:21:37.323 "name": null, 00:21:37.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.323 "is_configured": false, 00:21:37.323 "data_offset": 0, 00:21:37.323 "data_size": 7936 00:21:37.323 }, 00:21:37.323 { 00:21:37.323 "name": "BaseBdev2", 00:21:37.323 "uuid": "eeb08e04-d0bd-5a38-a0d4-d4c53858f4fc", 00:21:37.323 "is_configured": true, 00:21:37.323 "data_offset": 256, 00:21:37.323 "data_size": 7936 00:21:37.323 } 00:21:37.323 ] 00:21:37.323 }' 00:21:37.323 15:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:37.323 15:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:37.892 15:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:37.892 15:48:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.892 15:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:37.892 [2024-12-06 15:48:20.954292] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:37.892 [2024-12-06 15:48:20.954492] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:37.893 [2024-12-06 15:48:20.954544] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:37.893 [2024-12-06 15:48:20.954561] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:37.893 [2024-12-06 15:48:20.954815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:37.893 [2024-12-06 15:48:20.954835] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:37.893 [2024-12-06 15:48:20.954899] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:37.893 [2024-12-06 15:48:20.954917] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:37.893 [2024-12-06 15:48:20.954929] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:37.893 [2024-12-06 15:48:20.954956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:37.893 spare 00:21:37.893 [2024-12-06 15:48:20.972913] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:21:37.893 15:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.893 15:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:37.893 [2024-12-06 15:48:20.975380] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:38.829 15:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:38.829 15:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:38.829 15:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:38.829 15:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:38.829 15:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:38.829 15:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:38.829 15:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.829 15:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.829 15:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:38.829 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.829 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:21:38.829 "name": "raid_bdev1", 00:21:38.829 "uuid": "426d0f54-74b8-49ac-9106-421fa1a401cd", 00:21:38.829 "strip_size_kb": 0, 00:21:38.829 "state": "online", 00:21:38.829 "raid_level": "raid1", 00:21:38.829 "superblock": true, 00:21:38.829 "num_base_bdevs": 2, 00:21:38.829 "num_base_bdevs_discovered": 2, 00:21:38.829 "num_base_bdevs_operational": 2, 00:21:38.829 "process": { 00:21:38.829 "type": "rebuild", 00:21:38.829 "target": "spare", 00:21:38.829 "progress": { 00:21:38.829 "blocks": 2560, 00:21:38.829 "percent": 32 00:21:38.829 } 00:21:38.829 }, 00:21:38.829 "base_bdevs_list": [ 00:21:38.829 { 00:21:38.829 "name": "spare", 00:21:38.829 "uuid": "8c28f6e7-369c-5e96-a92a-410c156e5bab", 00:21:38.829 "is_configured": true, 00:21:38.829 "data_offset": 256, 00:21:38.829 "data_size": 7936 00:21:38.829 }, 00:21:38.829 { 00:21:38.829 "name": "BaseBdev2", 00:21:38.829 "uuid": "eeb08e04-d0bd-5a38-a0d4-d4c53858f4fc", 00:21:38.829 "is_configured": true, 00:21:38.829 "data_offset": 256, 00:21:38.829 "data_size": 7936 00:21:38.829 } 00:21:38.829 ] 00:21:38.829 }' 00:21:38.829 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:38.829 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:38.829 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:38.829 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:38.829 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:38.829 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.829 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:38.829 [2024-12-06 
15:48:22.118769] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:39.089 [2024-12-06 15:48:22.184208] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:39.089 [2024-12-06 15:48:22.184277] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:39.089 [2024-12-06 15:48:22.184314] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:39.089 [2024-12-06 15:48:22.184323] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:39.089 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.089 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:39.089 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:39.089 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:39.089 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:39.089 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:39.089 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:39.089 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:39.089 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:39.089 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:39.089 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:39.089 15:48:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.089 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:39.089 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.089 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:39.089 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.089 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:39.089 "name": "raid_bdev1", 00:21:39.089 "uuid": "426d0f54-74b8-49ac-9106-421fa1a401cd", 00:21:39.089 "strip_size_kb": 0, 00:21:39.089 "state": "online", 00:21:39.089 "raid_level": "raid1", 00:21:39.089 "superblock": true, 00:21:39.089 "num_base_bdevs": 2, 00:21:39.089 "num_base_bdevs_discovered": 1, 00:21:39.089 "num_base_bdevs_operational": 1, 00:21:39.089 "base_bdevs_list": [ 00:21:39.089 { 00:21:39.089 "name": null, 00:21:39.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.089 "is_configured": false, 00:21:39.089 "data_offset": 0, 00:21:39.089 "data_size": 7936 00:21:39.089 }, 00:21:39.089 { 00:21:39.089 "name": "BaseBdev2", 00:21:39.089 "uuid": "eeb08e04-d0bd-5a38-a0d4-d4c53858f4fc", 00:21:39.089 "is_configured": true, 00:21:39.089 "data_offset": 256, 00:21:39.089 "data_size": 7936 00:21:39.089 } 00:21:39.089 ] 00:21:39.089 }' 00:21:39.089 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:39.089 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:39.349 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:39.349 15:48:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:39.349 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:39.349 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:39.349 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:39.349 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:39.349 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.349 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.349 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:39.349 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.349 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:39.349 "name": "raid_bdev1", 00:21:39.349 "uuid": "426d0f54-74b8-49ac-9106-421fa1a401cd", 00:21:39.349 "strip_size_kb": 0, 00:21:39.349 "state": "online", 00:21:39.349 "raid_level": "raid1", 00:21:39.349 "superblock": true, 00:21:39.349 "num_base_bdevs": 2, 00:21:39.349 "num_base_bdevs_discovered": 1, 00:21:39.349 "num_base_bdevs_operational": 1, 00:21:39.349 "base_bdevs_list": [ 00:21:39.349 { 00:21:39.349 "name": null, 00:21:39.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.349 "is_configured": false, 00:21:39.349 "data_offset": 0, 00:21:39.349 "data_size": 7936 00:21:39.349 }, 00:21:39.349 { 00:21:39.349 "name": "BaseBdev2", 00:21:39.349 "uuid": "eeb08e04-d0bd-5a38-a0d4-d4c53858f4fc", 00:21:39.349 "is_configured": true, 00:21:39.349 "data_offset": 256, 
00:21:39.349 "data_size": 7936 00:21:39.349 } 00:21:39.349 ] 00:21:39.349 }' 00:21:39.349 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:39.349 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:39.349 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:39.608 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:39.608 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:39.608 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.608 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:39.608 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.608 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:39.608 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.608 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:39.608 [2024-12-06 15:48:22.692981] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:39.608 [2024-12-06 15:48:22.693173] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:39.608 [2024-12-06 15:48:22.693238] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:39.608 [2024-12-06 15:48:22.693331] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:39.608 [2024-12-06 15:48:22.693593] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:39.608 [2024-12-06 15:48:22.693611] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:39.608 [2024-12-06 15:48:22.693674] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:39.608 [2024-12-06 15:48:22.693699] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:39.608 [2024-12-06 15:48:22.693713] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:39.609 [2024-12-06 15:48:22.693726] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:39.609 BaseBdev1 00:21:39.609 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.609 15:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:40.547 15:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:40.547 15:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:40.547 15:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:40.547 15:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:40.547 15:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:40.547 15:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:40.547 15:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:40.547 15:48:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:40.547 15:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:40.547 15:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:40.547 15:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.547 15:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:40.547 15:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.547 15:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:40.547 15:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.547 15:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:40.547 "name": "raid_bdev1", 00:21:40.548 "uuid": "426d0f54-74b8-49ac-9106-421fa1a401cd", 00:21:40.548 "strip_size_kb": 0, 00:21:40.548 "state": "online", 00:21:40.548 "raid_level": "raid1", 00:21:40.548 "superblock": true, 00:21:40.548 "num_base_bdevs": 2, 00:21:40.548 "num_base_bdevs_discovered": 1, 00:21:40.548 "num_base_bdevs_operational": 1, 00:21:40.548 "base_bdevs_list": [ 00:21:40.548 { 00:21:40.548 "name": null, 00:21:40.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:40.548 "is_configured": false, 00:21:40.548 "data_offset": 0, 00:21:40.548 "data_size": 7936 00:21:40.548 }, 00:21:40.548 { 00:21:40.548 "name": "BaseBdev2", 00:21:40.548 "uuid": "eeb08e04-d0bd-5a38-a0d4-d4c53858f4fc", 00:21:40.548 "is_configured": true, 00:21:40.548 "data_offset": 256, 00:21:40.548 "data_size": 7936 00:21:40.548 } 00:21:40.548 ] 00:21:40.548 }' 00:21:40.548 15:48:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:40.548 15:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:41.117 15:48:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:41.117 15:48:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:41.117 15:48:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:41.117 15:48:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:41.117 15:48:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:41.117 15:48:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.117 15:48:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.117 15:48:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:41.117 15:48:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:41.117 15:48:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.117 15:48:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:41.117 "name": "raid_bdev1", 00:21:41.117 "uuid": "426d0f54-74b8-49ac-9106-421fa1a401cd", 00:21:41.117 "strip_size_kb": 0, 00:21:41.117 "state": "online", 00:21:41.117 "raid_level": "raid1", 00:21:41.117 "superblock": true, 00:21:41.117 "num_base_bdevs": 2, 00:21:41.117 "num_base_bdevs_discovered": 1, 00:21:41.117 "num_base_bdevs_operational": 1, 00:21:41.117 "base_bdevs_list": [ 00:21:41.117 { 00:21:41.117 "name": 
null, 00:21:41.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.117 "is_configured": false, 00:21:41.117 "data_offset": 0, 00:21:41.117 "data_size": 7936 00:21:41.117 }, 00:21:41.117 { 00:21:41.117 "name": "BaseBdev2", 00:21:41.117 "uuid": "eeb08e04-d0bd-5a38-a0d4-d4c53858f4fc", 00:21:41.117 "is_configured": true, 00:21:41.117 "data_offset": 256, 00:21:41.117 "data_size": 7936 00:21:41.117 } 00:21:41.117 ] 00:21:41.117 }' 00:21:41.117 15:48:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:41.117 15:48:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:41.117 15:48:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:41.117 15:48:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:41.117 15:48:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:41.117 15:48:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:21:41.117 15:48:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:41.117 15:48:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:41.117 15:48:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:41.117 15:48:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:41.117 15:48:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:41.117 15:48:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:41.117 15:48:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.117 15:48:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:41.117 [2024-12-06 15:48:24.249813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:41.117 [2024-12-06 15:48:24.250367] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:41.117 [2024-12-06 15:48:24.250522] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:41.117 request: 00:21:41.117 { 00:21:41.117 "base_bdev": "BaseBdev1", 00:21:41.117 "raid_bdev": "raid_bdev1", 00:21:41.117 "method": "bdev_raid_add_base_bdev", 00:21:41.117 "req_id": 1 00:21:41.117 } 00:21:41.117 Got JSON-RPC error response 00:21:41.117 response: 00:21:41.117 { 00:21:41.117 "code": -22, 00:21:41.117 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:41.117 } 00:21:41.117 15:48:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:41.117 15:48:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:21:41.117 15:48:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:41.117 15:48:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:41.117 15:48:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:41.117 15:48:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:42.060 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:21:42.060 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:42.060 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:42.060 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:42.060 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:42.060 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:42.060 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:42.060 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:42.060 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:42.060 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:42.060 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.060 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.060 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.060 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:42.060 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.060 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:42.060 "name": "raid_bdev1", 00:21:42.060 "uuid": "426d0f54-74b8-49ac-9106-421fa1a401cd", 00:21:42.060 "strip_size_kb": 0, 
00:21:42.060 "state": "online", 00:21:42.060 "raid_level": "raid1", 00:21:42.060 "superblock": true, 00:21:42.060 "num_base_bdevs": 2, 00:21:42.060 "num_base_bdevs_discovered": 1, 00:21:42.060 "num_base_bdevs_operational": 1, 00:21:42.060 "base_bdevs_list": [ 00:21:42.060 { 00:21:42.060 "name": null, 00:21:42.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.060 "is_configured": false, 00:21:42.060 "data_offset": 0, 00:21:42.060 "data_size": 7936 00:21:42.060 }, 00:21:42.060 { 00:21:42.060 "name": "BaseBdev2", 00:21:42.060 "uuid": "eeb08e04-d0bd-5a38-a0d4-d4c53858f4fc", 00:21:42.060 "is_configured": true, 00:21:42.060 "data_offset": 256, 00:21:42.060 "data_size": 7936 00:21:42.060 } 00:21:42.060 ] 00:21:42.060 }' 00:21:42.060 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:42.060 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:42.627 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:42.627 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:42.627 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:42.627 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:42.627 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:42.627 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.627 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.627 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:42.627 15:48:25 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.627 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.627 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:42.627 "name": "raid_bdev1", 00:21:42.627 "uuid": "426d0f54-74b8-49ac-9106-421fa1a401cd", 00:21:42.627 "strip_size_kb": 0, 00:21:42.627 "state": "online", 00:21:42.627 "raid_level": "raid1", 00:21:42.627 "superblock": true, 00:21:42.627 "num_base_bdevs": 2, 00:21:42.627 "num_base_bdevs_discovered": 1, 00:21:42.627 "num_base_bdevs_operational": 1, 00:21:42.627 "base_bdevs_list": [ 00:21:42.627 { 00:21:42.627 "name": null, 00:21:42.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.627 "is_configured": false, 00:21:42.627 "data_offset": 0, 00:21:42.627 "data_size": 7936 00:21:42.627 }, 00:21:42.627 { 00:21:42.627 "name": "BaseBdev2", 00:21:42.627 "uuid": "eeb08e04-d0bd-5a38-a0d4-d4c53858f4fc", 00:21:42.627 "is_configured": true, 00:21:42.627 "data_offset": 256, 00:21:42.627 "data_size": 7936 00:21:42.627 } 00:21:42.627 ] 00:21:42.627 }' 00:21:42.627 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:42.627 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:42.627 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:42.627 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:42.627 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89021 00:21:42.627 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89021 ']' 00:21:42.627 15:48:25 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89021 00:21:42.627 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:21:42.627 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:42.627 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89021 00:21:42.627 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:42.627 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:42.627 killing process with pid 89021 00:21:42.627 Received shutdown signal, test time was about 60.000000 seconds 00:21:42.627 00:21:42.627 Latency(us) 00:21:42.627 [2024-12-06T15:48:25.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.627 [2024-12-06T15:48:25.922Z] =================================================================================================================== 00:21:42.627 [2024-12-06T15:48:25.922Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:42.627 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89021' 00:21:42.627 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89021 00:21:42.627 [2024-12-06 15:48:25.835136] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:42.627 15:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89021 00:21:42.627 [2024-12-06 15:48:25.835348] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:42.627 [2024-12-06 15:48:25.835417] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:21:42.628 [2024-12-06 15:48:25.835434] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:42.886 [2024-12-06 15:48:26.153114] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:44.263 15:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:21:44.263 00:21:44.263 real 0m17.194s 00:21:44.263 user 0m22.070s 00:21:44.263 sys 0m1.874s 00:21:44.263 15:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:44.263 15:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:44.263 ************************************ 00:21:44.264 END TEST raid_rebuild_test_sb_md_interleaved 00:21:44.264 ************************************ 00:21:44.264 15:48:27 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:21:44.264 15:48:27 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:21:44.264 15:48:27 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89021 ']' 00:21:44.264 15:48:27 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89021 00:21:44.264 15:48:27 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:21:44.264 00:21:44.264 real 12m4.860s 00:21:44.264 user 15m53.694s 00:21:44.264 sys 2m16.862s 00:21:44.264 15:48:27 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:44.264 ************************************ 00:21:44.264 END TEST bdev_raid 00:21:44.264 ************************************ 00:21:44.264 15:48:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:44.264 15:48:27 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:21:44.264 15:48:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:44.264 15:48:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:44.264 15:48:27 -- common/autotest_common.sh@10 -- # set +x 00:21:44.264 
************************************ 00:21:44.264 START TEST spdkcli_raid 00:21:44.264 ************************************ 00:21:44.264 15:48:27 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:21:44.524 * Looking for test storage... 00:21:44.524 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:21:44.524 15:48:27 spdkcli_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:44.524 15:48:27 spdkcli_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:21:44.524 15:48:27 spdkcli_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:44.524 15:48:27 spdkcli_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:44.524 15:48:27 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:44.524 15:48:27 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:44.524 15:48:27 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:44.524 15:48:27 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:21:44.524 15:48:27 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:21:44.524 15:48:27 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:21:44.524 15:48:27 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:21:44.524 15:48:27 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:21:44.524 15:48:27 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:21:44.524 15:48:27 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:21:44.524 15:48:27 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:44.524 15:48:27 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:21:44.524 15:48:27 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:21:44.524 15:48:27 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:44.524 15:48:27 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:44.524 15:48:27 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:21:44.524 15:48:27 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:21:44.525 15:48:27 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:44.525 15:48:27 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:21:44.525 15:48:27 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:44.525 15:48:27 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:21:44.525 15:48:27 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:21:44.525 15:48:27 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:44.525 15:48:27 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:21:44.525 15:48:27 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:44.525 15:48:27 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:44.525 15:48:27 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:44.525 15:48:27 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:21:44.525 15:48:27 spdkcli_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:44.525 15:48:27 spdkcli_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:44.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.525 --rc genhtml_branch_coverage=1 00:21:44.525 --rc genhtml_function_coverage=1 00:21:44.525 --rc genhtml_legend=1 00:21:44.525 --rc geninfo_all_blocks=1 00:21:44.525 --rc geninfo_unexecuted_blocks=1 00:21:44.525 00:21:44.525 ' 00:21:44.525 15:48:27 spdkcli_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:44.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.525 --rc genhtml_branch_coverage=1 00:21:44.525 --rc genhtml_function_coverage=1 00:21:44.525 --rc genhtml_legend=1 00:21:44.525 --rc geninfo_all_blocks=1 00:21:44.525 --rc geninfo_unexecuted_blocks=1 00:21:44.525 00:21:44.525 ' 00:21:44.525 
15:48:27 spdkcli_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:44.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.525 --rc genhtml_branch_coverage=1 00:21:44.525 --rc genhtml_function_coverage=1 00:21:44.525 --rc genhtml_legend=1 00:21:44.525 --rc geninfo_all_blocks=1 00:21:44.525 --rc geninfo_unexecuted_blocks=1 00:21:44.525 00:21:44.525 ' 00:21:44.525 15:48:27 spdkcli_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:44.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.525 --rc genhtml_branch_coverage=1 00:21:44.525 --rc genhtml_function_coverage=1 00:21:44.525 --rc genhtml_legend=1 00:21:44.525 --rc geninfo_all_blocks=1 00:21:44.525 --rc geninfo_unexecuted_blocks=1 00:21:44.525 00:21:44.525 ' 00:21:44.525 15:48:27 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:21:44.525 15:48:27 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:21:44.525 15:48:27 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:21:44.525 15:48:27 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:21:44.525 15:48:27 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:21:44.525 15:48:27 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:21:44.525 15:48:27 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:21:44.525 15:48:27 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:21:44.525 15:48:27 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:21:44.525 15:48:27 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:21:44.525 15:48:27 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:21:44.525 15:48:27 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:21:44.525 15:48:27 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:21:44.525 15:48:27 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:21:44.525 15:48:27 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:21:44.525 15:48:27 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:21:44.525 15:48:27 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:21:44.525 15:48:27 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:21:44.525 15:48:27 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:21:44.525 15:48:27 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:21:44.525 15:48:27 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:21:44.525 15:48:27 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:21:44.525 15:48:27 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:21:44.525 15:48:27 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:21:44.525 15:48:27 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:21:44.525 15:48:27 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:21:44.525 15:48:27 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:21:44.525 15:48:27 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:21:44.525 15:48:27 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:21:44.525 15:48:27 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:21:44.525 15:48:27 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:21:44.525 15:48:27 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:21:44.525 15:48:27 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:21:44.525 15:48:27 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:44.525 15:48:27 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:44.525 15:48:27 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:21:44.525 15:48:27 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89692 00:21:44.525 15:48:27 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:21:44.525 15:48:27 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89692 00:21:44.525 15:48:27 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 89692 ']' 00:21:44.525 15:48:27 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.525 15:48:27 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:44.525 15:48:27 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:44.525 15:48:27 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:44.525 15:48:27 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:44.785 [2024-12-06 15:48:27.888659] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:21:44.785 [2024-12-06 15:48:27.888947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89692 ] 00:21:45.044 [2024-12-06 15:48:28.078148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:45.044 [2024-12-06 15:48:28.213194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.044 [2024-12-06 15:48:28.213230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:45.982 15:48:29 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:45.982 15:48:29 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:21:45.982 15:48:29 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:21:45.982 15:48:29 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:45.982 15:48:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:46.242 15:48:29 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:21:46.242 15:48:29 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:46.242 15:48:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:46.242 15:48:29 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:21:46.242 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:21:46.242 ' 00:21:47.620 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:21:47.620 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:21:47.880 15:48:30 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:21:47.880 15:48:30 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:47.880 15:48:30 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:21:47.880 15:48:30 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:21:47.880 15:48:30 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:47.880 15:48:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:47.880 15:48:31 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:21:47.880 ' 00:21:48.817 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:21:49.075 15:48:32 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:21:49.075 15:48:32 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:49.075 15:48:32 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:49.075 15:48:32 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:21:49.075 15:48:32 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:49.075 15:48:32 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:49.075 15:48:32 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:21:49.075 15:48:32 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:21:49.641 15:48:32 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:21:49.641 15:48:32 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:21:49.641 15:48:32 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:21:49.641 15:48:32 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:49.641 15:48:32 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:49.641 15:48:32 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:21:49.641 15:48:32 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:49.641 15:48:32 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:49.641 15:48:32 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:21:49.641 ' 00:21:50.577 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:21:50.835 15:48:33 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:21:50.835 15:48:33 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:50.835 15:48:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:50.835 15:48:33 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:21:50.835 15:48:33 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:50.835 15:48:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:50.835 15:48:33 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:21:50.835 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:21:50.835 ' 00:21:52.213 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:21:52.213 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:21:52.213 15:48:35 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:21:52.213 15:48:35 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:52.213 15:48:35 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:52.213 15:48:35 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89692 00:21:52.213 15:48:35 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89692 ']' 00:21:52.213 15:48:35 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89692 00:21:52.213 15:48:35 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:21:52.471 15:48:35 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:52.471 15:48:35 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89692 00:21:52.471 killing process with pid 89692 00:21:52.471 15:48:35 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:52.471 15:48:35 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:52.471 15:48:35 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89692' 00:21:52.471 15:48:35 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 89692 00:21:52.471 15:48:35 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 89692 00:21:55.007 15:48:38 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:21:55.007 15:48:38 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89692 ']' 00:21:55.007 15:48:38 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89692 00:21:55.007 15:48:38 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89692 ']' 00:21:55.007 15:48:38 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89692 00:21:55.007 Process with pid 89692 is not found 00:21:55.007 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89692) - No such process 00:21:55.007 15:48:38 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 89692 is not found' 00:21:55.007 15:48:38 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:21:55.007 15:48:38 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:21:55.007 15:48:38 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:21:55.007 15:48:38 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:21:55.007 ************************************ 00:21:55.007 END TEST spdkcli_raid 
00:21:55.007 ************************************ 00:21:55.007 00:21:55.007 real 0m10.585s 00:21:55.007 user 0m21.423s 00:21:55.007 sys 0m1.402s 00:21:55.007 15:48:38 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:55.007 15:48:38 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:55.007 15:48:38 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:21:55.007 15:48:38 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:55.007 15:48:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:55.007 15:48:38 -- common/autotest_common.sh@10 -- # set +x 00:21:55.007 ************************************ 00:21:55.007 START TEST blockdev_raid5f 00:21:55.007 ************************************ 00:21:55.007 15:48:38 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:21:55.007 * Looking for test storage... 00:21:55.267 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:21:55.267 15:48:38 blockdev_raid5f -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:55.267 15:48:38 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lcov --version 00:21:55.267 15:48:38 blockdev_raid5f -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:55.267 15:48:38 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:55.267 15:48:38 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:55.267 15:48:38 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:55.267 15:48:38 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:55.267 15:48:38 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:21:55.267 15:48:38 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:21:55.267 15:48:38 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:21:55.267 15:48:38 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:21:55.267 15:48:38 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:21:55.267 15:48:38 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:21:55.267 15:48:38 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:21:55.267 15:48:38 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:55.267 15:48:38 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:21:55.267 15:48:38 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:21:55.267 15:48:38 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:55.267 15:48:38 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:55.267 15:48:38 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:21:55.267 15:48:38 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:21:55.267 15:48:38 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:55.267 15:48:38 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:21:55.267 15:48:38 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:21:55.267 15:48:38 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:21:55.267 15:48:38 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:21:55.267 15:48:38 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:55.267 15:48:38 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:21:55.267 15:48:38 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:21:55.267 15:48:38 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:55.267 15:48:38 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:55.267 15:48:38 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:21:55.267 15:48:38 blockdev_raid5f -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:55.267 15:48:38 blockdev_raid5f -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:55.267 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.267 --rc genhtml_branch_coverage=1 00:21:55.268 --rc genhtml_function_coverage=1 00:21:55.268 --rc genhtml_legend=1 00:21:55.268 --rc geninfo_all_blocks=1 00:21:55.268 --rc geninfo_unexecuted_blocks=1 00:21:55.268 00:21:55.268 ' 00:21:55.268 15:48:38 blockdev_raid5f -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:55.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.268 --rc genhtml_branch_coverage=1 00:21:55.268 --rc genhtml_function_coverage=1 00:21:55.268 --rc genhtml_legend=1 00:21:55.268 --rc geninfo_all_blocks=1 00:21:55.268 --rc geninfo_unexecuted_blocks=1 00:21:55.268 00:21:55.268 ' 00:21:55.268 15:48:38 blockdev_raid5f -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:55.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.268 --rc genhtml_branch_coverage=1 00:21:55.268 --rc genhtml_function_coverage=1 00:21:55.268 --rc genhtml_legend=1 00:21:55.268 --rc geninfo_all_blocks=1 00:21:55.268 --rc geninfo_unexecuted_blocks=1 00:21:55.268 00:21:55.268 ' 00:21:55.268 15:48:38 blockdev_raid5f -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:55.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.268 --rc genhtml_branch_coverage=1 00:21:55.268 --rc genhtml_function_coverage=1 00:21:55.268 --rc genhtml_legend=1 00:21:55.268 --rc geninfo_all_blocks=1 00:21:55.268 --rc geninfo_unexecuted_blocks=1 00:21:55.268 00:21:55.268 ' 00:21:55.268 15:48:38 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:21:55.268 15:48:38 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:21:55.268 15:48:38 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:21:55.268 15:48:38 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:55.268 15:48:38 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:21:55.268 15:48:38 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:21:55.268 15:48:38 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:21:55.268 15:48:38 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:21:55.268 15:48:38 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:21:55.268 15:48:38 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:21:55.268 15:48:38 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:21:55.268 15:48:38 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:21:55.268 15:48:38 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:21:55.268 15:48:38 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:21:55.268 15:48:38 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:21:55.268 15:48:38 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:21:55.268 15:48:38 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:21:55.268 15:48:38 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:21:55.268 15:48:38 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:21:55.268 15:48:38 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:21:55.268 15:48:38 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:21:55.268 15:48:38 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:21:55.268 15:48:38 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:21:55.268 15:48:38 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:21:55.268 15:48:38 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=89987 00:21:55.268 15:48:38 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:21:55.268 15:48:38 blockdev_raid5f -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:21:55.268 15:48:38 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 89987 00:21:55.268 15:48:38 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 89987 ']' 00:21:55.268 15:48:38 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.268 15:48:38 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:55.268 15:48:38 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:55.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:55.268 15:48:38 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:55.268 15:48:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:55.268 [2024-12-06 15:48:38.546676] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:21:55.268 [2024-12-06 15:48:38.547100] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89987 ] 00:21:55.527 [2024-12-06 15:48:38.731638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.788 [2024-12-06 15:48:38.859364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.727 15:48:39 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:56.727 15:48:39 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:21:56.727 15:48:39 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:21:56.727 15:48:39 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:21:56.727 15:48:39 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:21:56.727 15:48:39 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.727 15:48:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:56.727 Malloc0 00:21:56.727 Malloc1 00:21:56.727 Malloc2 00:21:56.727 15:48:39 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.727 15:48:39 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:21:56.727 15:48:39 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.727 15:48:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:56.727 15:48:39 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.727 15:48:39 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:21:56.727 15:48:39 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:21:56.727 15:48:39 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.727 15:48:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:56.727 15:48:40 
blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.727 15:48:40 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:21:56.727 15:48:40 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.727 15:48:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:56.986 15:48:40 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.986 15:48:40 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:21:56.986 15:48:40 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.986 15:48:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:56.986 15:48:40 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.986 15:48:40 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:21:56.986 15:48:40 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:21:56.987 15:48:40 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.987 15:48:40 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:21:56.987 15:48:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:56.987 15:48:40 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.987 15:48:40 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:21:56.987 15:48:40 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:21:56.987 15:48:40 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "92876b79-02f4-4587-b516-b1e1cf14fb06"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "92876b79-02f4-4587-b516-b1e1cf14fb06",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "92876b79-02f4-4587-b516-b1e1cf14fb06",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "e6543922-3322-4738-997b-1ee9ad8c7cd0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "bcf0f1e7-99cb-40d6-86aa-c2fe3aa49094",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "4eb406c7-1596-4262-a44c-6bdb2b78873f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:21:56.987 15:48:40 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:21:56.987 15:48:40 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:21:56.987 15:48:40 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:21:56.987 15:48:40 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 89987 00:21:56.987 15:48:40 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 89987 ']' 00:21:56.987 15:48:40 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 89987 00:21:56.987 15:48:40 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:21:56.987 15:48:40 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:56.987 
15:48:40 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89987 00:21:56.987 15:48:40 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:56.987 15:48:40 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:56.987 15:48:40 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89987' 00:21:56.987 killing process with pid 89987 00:21:56.987 15:48:40 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 89987 00:21:56.987 15:48:40 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 89987 00:22:00.282 15:48:43 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:00.283 15:48:43 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:22:00.283 15:48:43 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:22:00.283 15:48:43 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:00.283 15:48:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:00.283 ************************************ 00:22:00.283 START TEST bdev_hello_world 00:22:00.283 ************************************ 00:22:00.283 15:48:43 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:22:00.283 [2024-12-06 15:48:43.146014] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:22:00.283 [2024-12-06 15:48:43.146151] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90054 ] 00:22:00.283 [2024-12-06 15:48:43.331190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.283 [2024-12-06 15:48:43.459267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.851 [2024-12-06 15:48:44.079534] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:22:00.852 [2024-12-06 15:48:44.079592] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:22:00.852 [2024-12-06 15:48:44.079612] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:22:00.852 [2024-12-06 15:48:44.080138] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:22:00.852 [2024-12-06 15:48:44.080333] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:22:00.852 [2024-12-06 15:48:44.080354] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:22:00.852 [2024-12-06 15:48:44.080407] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:22:00.852 00:22:00.852 [2024-12-06 15:48:44.080428] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:22:02.770 00:22:02.770 real 0m2.515s 00:22:02.770 user 0m2.000s 00:22:02.770 sys 0m0.392s 00:22:02.770 ************************************ 00:22:02.770 END TEST bdev_hello_world 00:22:02.770 ************************************ 00:22:02.770 15:48:45 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:02.770 15:48:45 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:22:02.770 15:48:45 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:22:02.770 15:48:45 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:02.770 15:48:45 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:02.770 15:48:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:02.770 ************************************ 00:22:02.770 START TEST bdev_bounds 00:22:02.770 ************************************ 00:22:02.770 15:48:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:22:02.770 15:48:45 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90102 00:22:02.770 15:48:45 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:22:02.770 15:48:45 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:22:02.770 Process bdevio pid: 90102 00:22:02.770 15:48:45 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90102' 00:22:02.770 15:48:45 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90102 00:22:02.770 15:48:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90102 ']' 00:22:02.770 15:48:45 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:02.770 15:48:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:02.770 15:48:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.770 15:48:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:02.770 15:48:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:22:02.770 [2024-12-06 15:48:45.741481] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:22:02.770 [2024-12-06 15:48:45.741655] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90102 ] 00:22:02.770 [2024-12-06 15:48:45.926077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:03.028 [2024-12-06 15:48:46.063884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:03.028 [2024-12-06 15:48:46.063977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.028 [2024-12-06 15:48:46.063984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:03.596 15:48:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:03.596 15:48:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:22:03.596 15:48:46 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:22:03.596 I/O targets: 00:22:03.596 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:22:03.596 00:22:03.596 
00:22:03.596 CUnit - A unit testing framework for C - Version 2.1-3 00:22:03.596 http://cunit.sourceforge.net/ 00:22:03.596 00:22:03.596 00:22:03.596 Suite: bdevio tests on: raid5f 00:22:03.596 Test: blockdev write read block ...passed 00:22:03.596 Test: blockdev write zeroes read block ...passed 00:22:03.596 Test: blockdev write zeroes read no split ...passed 00:22:03.854 Test: blockdev write zeroes read split ...passed 00:22:03.854 Test: blockdev write zeroes read split partial ...passed 00:22:03.854 Test: blockdev reset ...passed 00:22:03.854 Test: blockdev write read 8 blocks ...passed 00:22:03.854 Test: blockdev write read size > 128k ...passed 00:22:03.854 Test: blockdev write read invalid size ...passed 00:22:03.854 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:03.854 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:03.854 Test: blockdev write read max offset ...passed 00:22:03.854 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:03.854 Test: blockdev writev readv 8 blocks ...passed 00:22:03.854 Test: blockdev writev readv 30 x 1block ...passed 00:22:03.854 Test: blockdev writev readv block ...passed 00:22:03.854 Test: blockdev writev readv size > 128k ...passed 00:22:03.854 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:03.854 Test: blockdev comparev and writev ...passed 00:22:03.854 Test: blockdev nvme passthru rw ...passed 00:22:03.854 Test: blockdev nvme passthru vendor specific ...passed 00:22:03.854 Test: blockdev nvme admin passthru ...passed 00:22:03.854 Test: blockdev copy ...passed 00:22:03.854 00:22:03.854 Run Summary: Type Total Ran Passed Failed Inactive 00:22:03.854 suites 1 1 n/a 0 0 00:22:03.854 tests 23 23 23 0 0 00:22:03.854 asserts 130 130 130 0 n/a 00:22:03.854 00:22:03.854 Elapsed time = 0.582 seconds 00:22:03.854 0 00:22:03.854 15:48:47 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90102 00:22:03.854 
15:48:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90102 ']' 00:22:03.854 15:48:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90102 00:22:03.854 15:48:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:22:03.854 15:48:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:03.854 15:48:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90102 00:22:03.854 killing process with pid 90102 00:22:03.854 15:48:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:03.854 15:48:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:03.854 15:48:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90102' 00:22:03.854 15:48:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90102 00:22:03.854 15:48:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90102 00:22:05.759 ************************************ 00:22:05.759 END TEST bdev_bounds 00:22:05.759 ************************************ 00:22:05.759 15:48:48 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:22:05.759 00:22:05.759 real 0m3.008s 00:22:05.759 user 0m7.360s 00:22:05.759 sys 0m0.503s 00:22:05.759 15:48:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:05.759 15:48:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:22:05.759 15:48:48 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:22:05.759 15:48:48 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:05.759 15:48:48 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:05.759 
15:48:48 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:05.759 ************************************ 00:22:05.759 START TEST bdev_nbd 00:22:05.759 ************************************ 00:22:05.759 15:48:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:22:05.759 15:48:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:22:05.759 15:48:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:22:05.759 15:48:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:05.759 15:48:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:05.759 15:48:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:22:05.759 15:48:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:22:05.759 15:48:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:22:05.759 15:48:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:22:05.759 15:48:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:22:05.759 15:48:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:22:05.759 15:48:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:22:05.759 15:48:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:22:05.759 15:48:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:22:05.759 15:48:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:22:05.759 15:48:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:22:05.759 15:48:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90168 00:22:05.759 15:48:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:22:05.759 15:48:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:22:05.759 15:48:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90168 /var/tmp/spdk-nbd.sock 00:22:05.759 15:48:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90168 ']' 00:22:05.759 15:48:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:22:05.759 15:48:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:05.759 15:48:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:22:05.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:22:05.759 15:48:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:05.759 15:48:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:22:05.759 [2024-12-06 15:48:48.827220] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:22:05.759 [2024-12-06 15:48:48.827377] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.759 [2024-12-06 15:48:49.013278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.018 [2024-12-06 15:48:49.142829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.588 15:48:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:06.588 15:48:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:22:06.588 15:48:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:22:06.588 15:48:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:06.588 15:48:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:22:06.588 15:48:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:22:06.588 15:48:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:22:06.588 15:48:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:06.588 15:48:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:22:06.588 15:48:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:22:06.588 15:48:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:22:06.588 15:48:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:22:06.588 15:48:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:22:06.588 15:48:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:22:06.588 15:48:49 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:22:06.848 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:22:06.848 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:22:06.848 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:22:06.848 15:48:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:06.848 15:48:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:06.848 15:48:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:06.848 15:48:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:06.848 15:48:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:06.848 15:48:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:06.848 15:48:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:06.848 15:48:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:06.848 15:48:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:06.848 1+0 records in 00:22:06.848 1+0 records out 00:22:06.848 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000503101 s, 8.1 MB/s 00:22:06.848 15:48:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:06.848 15:48:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:06.848 15:48:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:06.848 15:48:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:22:06.848 15:48:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:06.848 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:06.848 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:22:06.848 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:07.107 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:22:07.107 { 00:22:07.107 "nbd_device": "/dev/nbd0", 00:22:07.107 "bdev_name": "raid5f" 00:22:07.107 } 00:22:07.107 ]' 00:22:07.107 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:22:07.107 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:22:07.107 { 00:22:07.107 "nbd_device": "/dev/nbd0", 00:22:07.107 "bdev_name": "raid5f" 00:22:07.107 } 00:22:07.107 ]' 00:22:07.107 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:22:07.107 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:07.107 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:07.107 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:07.107 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:07.107 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:22:07.107 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:07.107 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:07.367 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:22:07.367 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:07.367 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:07.367 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:07.367 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:07.367 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:07.367 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:07.367 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:07.367 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:07.367 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:07.367 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:07.626 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:22:07.626 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:22:07.626 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:07.626 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:22:07.626 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:22:07.626 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:07.626 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:22:07.626 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:22:07.626 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:22:07.626 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:22:07.626 15:48:50 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:22:07.626 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:22:07.626 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:22:07.626 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:07.626 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:22:07.626 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:22:07.626 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:22:07.626 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:22:07.626 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:22:07.626 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:07.626 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:22:07.626 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:07.626 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:07.626 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:07.626 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:22:07.626 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:07.626 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:07.626 15:48:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:22:07.886 /dev/nbd0 00:22:07.886 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:07.886 15:48:51 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:07.886 15:48:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:07.886 15:48:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:07.886 15:48:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:07.886 15:48:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:07.886 15:48:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:07.886 15:48:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:07.886 15:48:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:07.886 15:48:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:07.886 15:48:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:07.886 1+0 records in 00:22:07.886 1+0 records out 00:22:07.886 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385622 s, 10.6 MB/s 00:22:07.886 15:48:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:07.886 15:48:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:07.886 15:48:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:07.886 15:48:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:07.886 15:48:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:07.886 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:07.886 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:07.886 15:48:51 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:07.886 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:07.886 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:08.145 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:22:08.145 { 00:22:08.145 "nbd_device": "/dev/nbd0", 00:22:08.145 "bdev_name": "raid5f" 00:22:08.145 } 00:22:08.145 ]' 00:22:08.145 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:22:08.145 { 00:22:08.145 "nbd_device": "/dev/nbd0", 00:22:08.145 "bdev_name": "raid5f" 00:22:08.145 } 00:22:08.146 ]' 00:22:08.146 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:08.146 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:22:08.146 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:22:08.146 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:08.146 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:22:08.146 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:22:08.146 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:22:08.146 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:22:08.146 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:22:08.146 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:22:08.146 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:22:08.146 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:22:08.146 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:08.146 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:22:08.146 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:22:08.146 256+0 records in 00:22:08.146 256+0 records out 00:22:08.146 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00535989 s, 196 MB/s 00:22:08.146 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:08.146 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:22:08.146 256+0 records in 00:22:08.146 256+0 records out 00:22:08.146 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0328951 s, 31.9 MB/s 00:22:08.146 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:22:08.146 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:22:08.146 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:22:08.146 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:22:08.146 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:08.146 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:22:08.146 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:22:08.146 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:08.146 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:22:08.146 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:08.146 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:08.146 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:08.146 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:08.146 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:08.146 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:22:08.146 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:08.146 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:08.405 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:08.405 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:08.405 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:08.405 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:08.405 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:08.405 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:08.405 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:08.405 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:08.405 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:08.405 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:08.405 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:22:08.664 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:22:08.664 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:08.664 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:22:08.664 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:22:08.664 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:08.664 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:22:08.664 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:22:08.664 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:22:08.664 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:22:08.664 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:22:08.664 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:22:08.664 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:22:08.664 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:08.664 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:08.664 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:22:08.664 15:48:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:22:08.923 malloc_lvol_verify 00:22:08.923 15:48:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:22:09.182 6fed32ca-8ffe-4901-b7f6-9cb9234af0f5 00:22:09.182 15:48:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:22:09.441 c45410ff-157f-4243-bc68-a16cc93dc4a3 00:22:09.441 15:48:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:22:09.701 /dev/nbd0 00:22:09.701 15:48:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:22:09.701 15:48:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:22:09.701 15:48:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:22:09.701 15:48:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:22:09.701 15:48:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:22:09.701 mke2fs 1.47.0 (5-Feb-2023) 00:22:09.701 Discarding device blocks: 0/4096 done 00:22:09.701 Creating filesystem with 4096 1k blocks and 1024 inodes 00:22:09.701 00:22:09.701 Allocating group tables: 0/1 done 00:22:09.701 Writing inode tables: 0/1 done 00:22:09.701 Creating journal (1024 blocks): done 00:22:09.701 Writing superblocks and filesystem accounting information: 0/1 done 00:22:09.701 00:22:09.701 15:48:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:09.701 15:48:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:09.701 15:48:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:09.701 15:48:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:09.701 15:48:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:22:09.701 15:48:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:09.701 15:48:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:09.701 15:48:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:09.701 15:48:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:09.701 15:48:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:09.701 15:48:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:09.701 15:48:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:09.701 15:48:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:09.701 15:48:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:09.701 15:48:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:09.702 15:48:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90168 00:22:09.702 15:48:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90168 ']' 00:22:09.702 15:48:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90168 00:22:09.702 15:48:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:22:09.702 15:48:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:09.702 15:48:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90168 00:22:09.961 15:48:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:09.961 15:48:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:09.961 killing process with pid 90168 00:22:09.961 15:48:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90168' 00:22:09.961 15:48:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90168 00:22:09.961 15:48:53 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90168 00:22:11.340 15:48:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:22:11.340 00:22:11.340 real 0m5.859s 00:22:11.340 user 0m7.514s 00:22:11.340 sys 0m1.611s 00:22:11.340 15:48:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:11.340 15:48:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:22:11.340 ************************************ 00:22:11.340 END TEST bdev_nbd 00:22:11.340 ************************************ 00:22:11.600 15:48:54 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:22:11.600 15:48:54 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:22:11.600 15:48:54 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:22:11.600 15:48:54 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:22:11.600 15:48:54 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:11.600 15:48:54 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:11.600 15:48:54 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:11.600 ************************************ 00:22:11.600 START TEST bdev_fio 00:22:11.600 ************************************ 00:22:11.600 15:48:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:22:11.600 15:48:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:22:11.600 15:48:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:22:11.600 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:22:11.600 15:48:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:22:11.600 15:48:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:22:11.600 15:48:54 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:22:11.600 15:48:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:22:11.600 15:48:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:22:11.600 15:48:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:11.600 15:48:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:22:11.600 15:48:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:22:11.600 15:48:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:22:11.600 15:48:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:22:11.600 15:48:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:22:11.600 15:48:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:22:11.600 15:48:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:22:11.600 15:48:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:11.600 15:48:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:22:11.600 15:48:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:22:11.600 15:48:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:22:11.600 15:48:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:22:11.600 15:48:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:22:11.600 15:48:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:22:11.600 15:48:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:22:11.600 15:48:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:22:11.600 15:48:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:22:11.600 15:48:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:22:11.600 15:48:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:22:11.600 15:48:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:11.600 15:48:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:22:11.600 15:48:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:11.600 15:48:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:22:11.600 ************************************ 00:22:11.600 START TEST bdev_fio_rw_verify 00:22:11.600 ************************************ 00:22:11.600 15:48:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:11.600 15:48:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:11.600 15:48:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:11.601 15:48:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:11.601 15:48:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:11.601 15:48:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:11.601 15:48:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:22:11.601 15:48:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:11.601 15:48:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:11.601 15:48:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:11.601 15:48:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:22:11.601 15:48:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:11.601 15:48:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:11.601 15:48:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:11.601 15:48:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:22:11.601 15:48:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:11.601 15:48:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:11.881 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:11.881 fio-3.35 00:22:11.881 Starting 1 thread 00:22:24.092 00:22:24.092 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90372: Fri Dec 6 15:49:06 2024 00:22:24.092 read: IOPS=11.5k, BW=45.0MiB/s (47.2MB/s)(450MiB/10001msec) 00:22:24.092 slat (usec): min=19, max=112, avg=21.14, stdev= 2.01 00:22:24.092 clat (usec): min=10, max=322, avg=139.19, stdev=49.67 00:22:24.092 lat (usec): min=31, max=344, avg=160.33, stdev=49.82 00:22:24.092 clat percentiles (usec): 00:22:24.092 | 50.000th=[ 143], 99.000th=[ 227], 99.900th=[ 247], 99.990th=[ 285], 00:22:24.092 | 99.999th=[ 310] 00:22:24.092 write: IOPS=12.1k, BW=47.2MiB/s (49.5MB/s)(466MiB/9864msec); 0 zone resets 00:22:24.092 slat (usec): min=7, max=355, avg=17.35, stdev= 3.45 00:22:24.092 clat (usec): min=61, max=1259, avg=317.02, stdev=40.60 00:22:24.092 lat (usec): min=78, max=1461, avg=334.36, stdev=41.34 00:22:24.092 clat percentiles (usec): 00:22:24.092 | 50.000th=[ 322], 99.000th=[ 388], 99.900th=[ 562], 99.990th=[ 1074], 00:22:24.092 | 99.999th=[ 1205] 00:22:24.092 bw ( KiB/s): min=44944, max=50696, per=98.75%, avg=47759.58, stdev=1661.90, samples=19 00:22:24.092 iops : min=11236, max=12674, avg=11939.79, stdev=415.61, samples=19 00:22:24.092 lat (usec) : 20=0.01%, 50=0.01%, 100=12.18%, 
250=39.98%, 500=47.76% 00:22:24.092 lat (usec) : 750=0.05%, 1000=0.01% 00:22:24.092 lat (msec) : 2=0.01% 00:22:24.092 cpu : usr=98.86%, sys=0.46%, ctx=21, majf=0, minf=9524 00:22:24.092 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:24.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:24.092 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:24.092 issued rwts: total=115211,119263,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:24.092 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:24.092 00:22:24.092 Run status group 0 (all jobs): 00:22:24.092 READ: bw=45.0MiB/s (47.2MB/s), 45.0MiB/s-45.0MiB/s (47.2MB/s-47.2MB/s), io=450MiB (472MB), run=10001-10001msec 00:22:24.092 WRITE: bw=47.2MiB/s (49.5MB/s), 47.2MiB/s-47.2MiB/s (49.5MB/s-49.5MB/s), io=466MiB (489MB), run=9864-9864msec 00:22:24.677 ----------------------------------------------------- 00:22:24.677 Suppressions used: 00:22:24.677 count bytes template 00:22:24.677 1 7 /usr/src/fio/parse.c 00:22:24.677 490 47040 /usr/src/fio/iolog.c 00:22:24.677 1 8 libtcmalloc_minimal.so 00:22:24.677 1 904 libcrypto.so 00:22:24.677 ----------------------------------------------------- 00:22:24.677 00:22:24.677 00:22:24.677 real 0m13.034s 00:22:24.677 user 0m13.264s 00:22:24.677 sys 0m0.877s 00:22:24.677 15:49:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:24.677 15:49:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:22:24.677 ************************************ 00:22:24.677 END TEST bdev_fio_rw_verify 00:22:24.677 ************************************ 00:22:24.677 15:49:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:22:24.677 15:49:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:24.677 15:49:07 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:22:24.677 15:49:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:24.677 15:49:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:22:24.677 15:49:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:22:24.677 15:49:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:22:24.677 15:49:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:22:24.677 15:49:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:22:24.677 15:49:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:22:24.678 15:49:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:22:24.678 15:49:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:24.678 15:49:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:22:24.678 15:49:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:22:24.678 15:49:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:22:24.678 15:49:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:22:24.678 15:49:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "92876b79-02f4-4587-b516-b1e1cf14fb06"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "92876b79-02f4-4587-b516-b1e1cf14fb06",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "92876b79-02f4-4587-b516-b1e1cf14fb06",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "e6543922-3322-4738-997b-1ee9ad8c7cd0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "bcf0f1e7-99cb-40d6-86aa-c2fe3aa49094",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "4eb406c7-1596-4262-a44c-6bdb2b78873f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:22:24.678 15:49:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:22:24.951 15:49:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:22:24.951 15:49:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:24.951 /home/vagrant/spdk_repo/spdk 00:22:24.951 15:49:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:22:24.951 15:49:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:22:24.951 15:49:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:22:24.951 00:22:24.951 real 0m13.320s 00:22:24.951 user 0m13.385s 00:22:24.951 sys 0m1.017s 00:22:24.951 15:49:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:24.951 15:49:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:22:24.951 ************************************ 00:22:24.951 END TEST bdev_fio 00:22:24.951 ************************************ 00:22:24.951 15:49:08 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:24.951 15:49:08 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:22:24.951 15:49:08 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:22:24.951 15:49:08 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:24.951 15:49:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:24.951 ************************************ 00:22:24.951 START TEST bdev_verify 00:22:24.951 ************************************ 00:22:24.951 15:49:08 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:22:24.951 [2024-12-06 15:49:08.151339] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 
00:22:24.951 [2024-12-06 15:49:08.151454] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90534 ] 00:22:25.218 [2024-12-06 15:49:08.334294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:25.218 [2024-12-06 15:49:08.468913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.218 [2024-12-06 15:49:08.468947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.784 Running I/O for 5 seconds... 00:22:28.095 15695.00 IOPS, 61.31 MiB/s [2024-12-06T15:49:12.325Z] 15927.00 IOPS, 62.21 MiB/s [2024-12-06T15:49:13.262Z] 15097.67 IOPS, 58.98 MiB/s [2024-12-06T15:49:14.198Z] 15562.75 IOPS, 60.79 MiB/s [2024-12-06T15:49:14.198Z] 15683.60 IOPS, 61.26 MiB/s 00:22:30.903 Latency(us) 00:22:30.903 [2024-12-06T15:49:14.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:30.903 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:30.903 Verification LBA range: start 0x0 length 0x2000 00:22:30.903 raid5f : 5.02 7862.53 30.71 0.00 0.00 24462.10 209.73 21792.69 00:22:30.903 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:30.903 Verification LBA range: start 0x2000 length 0x2000 00:22:30.903 raid5f : 5.02 7820.10 30.55 0.00 0.00 24642.61 345.45 22003.25 00:22:30.903 [2024-12-06T15:49:14.198Z] =================================================================================================================== 00:22:30.903 [2024-12-06T15:49:14.198Z] Total : 15682.63 61.26 0.00 0.00 24552.09 209.73 22003.25 00:22:32.806 00:22:32.806 real 0m7.557s 00:22:32.806 user 0m13.820s 00:22:32.806 sys 0m0.396s 00:22:32.806 15:49:15 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:32.806 15:49:15 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:22:32.806 ************************************ 00:22:32.806 END TEST bdev_verify 00:22:32.806 ************************************ 00:22:32.806 15:49:15 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:22:32.806 15:49:15 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:22:32.806 15:49:15 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:32.806 15:49:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:32.806 ************************************ 00:22:32.806 START TEST bdev_verify_big_io 00:22:32.806 ************************************ 00:22:32.807 15:49:15 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:22:32.807 [2024-12-06 15:49:15.778072] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:22:32.807 [2024-12-06 15:49:15.778200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90635 ] 00:22:32.807 [2024-12-06 15:49:15.960601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:32.807 [2024-12-06 15:49:16.091762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.807 [2024-12-06 15:49:16.091795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.745 Running I/O for 5 seconds... 
00:22:35.618 758.00 IOPS, 47.38 MiB/s [2024-12-06T15:49:19.852Z] 761.00 IOPS, 47.56 MiB/s [2024-12-06T15:49:21.228Z] 844.67 IOPS, 52.79 MiB/s [2024-12-06T15:49:22.166Z] 825.00 IOPS, 51.56 MiB/s [2024-12-06T15:49:22.166Z] 863.20 IOPS, 53.95 MiB/s 00:22:38.871 Latency(us) 00:22:38.871 [2024-12-06T15:49:22.166Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:38.871 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:22:38.871 Verification LBA range: start 0x0 length 0x200 00:22:38.871 raid5f : 5.28 432.89 27.06 0.00 0.00 7362215.87 274.71 309940.54 00:22:38.871 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:22:38.871 Verification LBA range: start 0x200 length 0x200 00:22:38.871 raid5f : 5.29 431.87 26.99 0.00 0.00 7415616.05 148.87 311625.00 00:22:38.871 [2024-12-06T15:49:22.166Z] =================================================================================================================== 00:22:38.871 [2024-12-06T15:49:22.166Z] Total : 864.76 54.05 0.00 0.00 7388915.96 148.87 311625.00 00:22:40.248 00:22:40.248 real 0m7.843s 00:22:40.248 user 0m14.435s 00:22:40.248 sys 0m0.376s 00:22:40.248 15:49:23 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:40.248 15:49:23 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:22:40.248 ************************************ 00:22:40.248 END TEST bdev_verify_big_io 00:22:40.248 ************************************ 00:22:40.506 15:49:23 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:40.506 15:49:23 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:22:40.506 15:49:23 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:40.506 15:49:23 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:40.506 ************************************ 00:22:40.506 START TEST bdev_write_zeroes 00:22:40.506 ************************************ 00:22:40.506 15:49:23 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:40.506 [2024-12-06 15:49:23.700911] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:22:40.506 [2024-12-06 15:49:23.701056] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90733 ] 00:22:40.763 [2024-12-06 15:49:23.891217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.763 [2024-12-06 15:49:24.018391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.698 Running I/O for 1 seconds... 
00:22:42.680 27279.00 IOPS, 106.56 MiB/s 00:22:42.680 Latency(us) 00:22:42.680 [2024-12-06T15:49:25.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:42.680 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:22:42.680 raid5f : 1.01 27249.87 106.44 0.00 0.00 4682.32 1566.02 6527.28 00:22:42.680 [2024-12-06T15:49:25.975Z] =================================================================================================================== 00:22:42.680 [2024-12-06T15:49:25.975Z] Total : 27249.87 106.44 0.00 0.00 4682.32 1566.02 6527.28 00:22:44.053 00:22:44.053 real 0m3.530s 00:22:44.053 user 0m3.036s 00:22:44.053 sys 0m0.365s 00:22:44.053 15:49:27 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:44.053 15:49:27 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:22:44.053 ************************************ 00:22:44.053 END TEST bdev_write_zeroes 00:22:44.053 ************************************ 00:22:44.053 15:49:27 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:44.053 15:49:27 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:22:44.053 15:49:27 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:44.053 15:49:27 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:44.053 ************************************ 00:22:44.053 START TEST bdev_json_nonenclosed 00:22:44.053 ************************************ 00:22:44.053 15:49:27 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:44.053 [2024-12-06 
15:49:27.312694] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:22:44.054 [2024-12-06 15:49:27.312819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90792 ] 00:22:44.311 [2024-12-06 15:49:27.493724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.569 [2024-12-06 15:49:27.625105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.569 [2024-12-06 15:49:27.625234] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:22:44.569 [2024-12-06 15:49:27.625268] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:22:44.569 [2024-12-06 15:49:27.625282] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:44.828 00:22:44.828 real 0m0.674s 00:22:44.828 user 0m0.425s 00:22:44.828 sys 0m0.145s 00:22:44.828 15:49:27 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:44.828 15:49:27 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:22:44.828 ************************************ 00:22:44.828 END TEST bdev_json_nonenclosed 00:22:44.828 ************************************ 00:22:44.828 15:49:27 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:44.828 15:49:27 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:22:44.828 15:49:27 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:44.828 15:49:27 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:44.828 
************************************ 00:22:44.828 START TEST bdev_json_nonarray 00:22:44.828 ************************************ 00:22:44.828 15:49:27 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:44.828 [2024-12-06 15:49:28.056832] Starting SPDK v25.01-pre git sha1 a718549f7 / DPDK 24.03.0 initialization... 00:22:44.828 [2024-12-06 15:49:28.056958] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90823 ] 00:22:45.086 [2024-12-06 15:49:28.240867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.086 [2024-12-06 15:49:28.367897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.086 [2024-12-06 15:49:28.368034] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:22:45.086 [2024-12-06 15:49:28.368059] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:22:45.086 [2024-12-06 15:49:28.368081] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:45.344 00:22:45.344 real 0m0.671s 00:22:45.344 user 0m0.403s 00:22:45.344 sys 0m0.164s 00:22:45.344 15:49:28 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:45.344 15:49:28 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:22:45.344 ************************************ 00:22:45.344 END TEST bdev_json_nonarray 00:22:45.344 ************************************ 00:22:45.602 15:49:28 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:22:45.602 15:49:28 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:22:45.602 15:49:28 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:22:45.602 15:49:28 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:22:45.602 15:49:28 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:22:45.602 15:49:28 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:22:45.602 15:49:28 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:45.602 15:49:28 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:22:45.602 15:49:28 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:22:45.602 15:49:28 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:22:45.602 15:49:28 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:22:45.602 00:22:45.602 real 0m50.533s 00:22:45.602 user 1m7.054s 00:22:45.602 sys 0m6.366s 00:22:45.602 15:49:28 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:45.603 15:49:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:45.603 
************************************ 00:22:45.603 END TEST blockdev_raid5f 00:22:45.603 ************************************ 00:22:45.603 15:49:28 -- spdk/autotest.sh@194 -- # uname -s 00:22:45.603 15:49:28 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:22:45.603 15:49:28 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:22:45.603 15:49:28 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:22:45.603 15:49:28 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:22:45.603 15:49:28 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:22:45.603 15:49:28 -- spdk/autotest.sh@260 -- # timing_exit lib 00:22:45.603 15:49:28 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:45.603 15:49:28 -- common/autotest_common.sh@10 -- # set +x 00:22:45.603 15:49:28 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:22:45.603 15:49:28 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:22:45.603 15:49:28 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:22:45.603 15:49:28 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:22:45.603 15:49:28 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:22:45.603 15:49:28 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:22:45.603 15:49:28 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:22:45.603 15:49:28 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:22:45.603 15:49:28 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:22:45.603 15:49:28 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:22:45.603 15:49:28 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:22:45.603 15:49:28 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:22:45.603 15:49:28 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:22:45.603 15:49:28 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:22:45.603 15:49:28 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:22:45.603 15:49:28 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:22:45.603 15:49:28 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:22:45.603 15:49:28 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:22:45.603 15:49:28 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:22:45.603 15:49:28 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:22:45.603 15:49:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:45.603 15:49:28 -- common/autotest_common.sh@10 -- # set +x 00:22:45.603 15:49:28 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:22:45.603 15:49:28 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:22:45.603 15:49:28 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:22:45.603 15:49:28 -- common/autotest_common.sh@10 -- # set +x 00:22:48.130 INFO: APP EXITING 00:22:48.130 INFO: killing all VMs 00:22:48.130 INFO: killing vhost app 00:22:48.130 INFO: EXIT DONE 00:22:48.389 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:48.647 Waiting for block devices as requested 00:22:48.647 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:48.905 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:49.840 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:49.840 Cleaning 00:22:49.840 Removing: /var/run/dpdk/spdk0/config 00:22:49.840 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:22:49.840 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:22:49.840 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:22:49.840 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:22:49.840 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:22:49.840 Removing: /var/run/dpdk/spdk0/hugepage_info 00:22:49.840 Removing: /dev/shm/spdk_tgt_trace.pid56763 00:22:49.840 Removing: /var/run/dpdk/spdk0 00:22:49.840 Removing: /var/run/dpdk/spdk_pid56513 00:22:49.840 Removing: /var/run/dpdk/spdk_pid56763 00:22:49.840 Removing: /var/run/dpdk/spdk_pid56999 00:22:49.840 Removing: /var/run/dpdk/spdk_pid57108 00:22:49.840 Removing: /var/run/dpdk/spdk_pid57170 00:22:49.840 Removing: /var/run/dpdk/spdk_pid57309 00:22:49.840 Removing: /var/run/dpdk/spdk_pid57327 
00:22:49.840 Removing: /var/run/dpdk/spdk_pid57543 00:22:49.840 Removing: /var/run/dpdk/spdk_pid57666 00:22:49.840 Removing: /var/run/dpdk/spdk_pid57779 00:22:49.840 Removing: /var/run/dpdk/spdk_pid57906 00:22:49.840 Removing: /var/run/dpdk/spdk_pid58020 00:22:49.840 Removing: /var/run/dpdk/spdk_pid58059 00:22:49.840 Removing: /var/run/dpdk/spdk_pid58096 00:22:49.840 Removing: /var/run/dpdk/spdk_pid58172 00:22:49.840 Removing: /var/run/dpdk/spdk_pid58300 00:22:49.840 Removing: /var/run/dpdk/spdk_pid58760 00:22:49.840 Removing: /var/run/dpdk/spdk_pid58841 00:22:49.840 Removing: /var/run/dpdk/spdk_pid58926 00:22:49.840 Removing: /var/run/dpdk/spdk_pid58945 00:22:49.840 Removing: /var/run/dpdk/spdk_pid59115 00:22:49.840 Removing: /var/run/dpdk/spdk_pid59131 00:22:49.840 Removing: /var/run/dpdk/spdk_pid59301 00:22:49.840 Removing: /var/run/dpdk/spdk_pid59317 00:22:49.840 Removing: /var/run/dpdk/spdk_pid59392 00:22:49.840 Removing: /var/run/dpdk/spdk_pid59416 00:22:49.840 Removing: /var/run/dpdk/spdk_pid59481 00:22:49.840 Removing: /var/run/dpdk/spdk_pid59509 00:22:49.840 Removing: /var/run/dpdk/spdk_pid59717 00:22:49.840 Removing: /var/run/dpdk/spdk_pid59748 00:22:49.840 Removing: /var/run/dpdk/spdk_pid59837 00:22:49.840 Removing: /var/run/dpdk/spdk_pid61230 00:22:49.840 Removing: /var/run/dpdk/spdk_pid61436 00:22:49.840 Removing: /var/run/dpdk/spdk_pid61582 00:22:49.840 Removing: /var/run/dpdk/spdk_pid62225 00:22:49.840 Removing: /var/run/dpdk/spdk_pid62442 00:22:49.840 Removing: /var/run/dpdk/spdk_pid62582 00:22:50.099 Removing: /var/run/dpdk/spdk_pid63231 00:22:50.099 Removing: /var/run/dpdk/spdk_pid63561 00:22:50.099 Removing: /var/run/dpdk/spdk_pid63701 00:22:50.099 Removing: /var/run/dpdk/spdk_pid65093 00:22:50.099 Removing: /var/run/dpdk/spdk_pid65346 00:22:50.099 Removing: /var/run/dpdk/spdk_pid65497 00:22:50.099 Removing: /var/run/dpdk/spdk_pid66878 00:22:50.099 Removing: /var/run/dpdk/spdk_pid67131 00:22:50.099 Removing: /var/run/dpdk/spdk_pid67282 
00:22:50.099 Removing: /var/run/dpdk/spdk_pid68668 00:22:50.099 Removing: /var/run/dpdk/spdk_pid69115 00:22:50.099 Removing: /var/run/dpdk/spdk_pid69261 00:22:50.099 Removing: /var/run/dpdk/spdk_pid70748 00:22:50.099 Removing: /var/run/dpdk/spdk_pid71010 00:22:50.099 Removing: /var/run/dpdk/spdk_pid71160 00:22:50.099 Removing: /var/run/dpdk/spdk_pid72649 00:22:50.099 Removing: /var/run/dpdk/spdk_pid72920 00:22:50.099 Removing: /var/run/dpdk/spdk_pid73066 00:22:50.099 Removing: /var/run/dpdk/spdk_pid74553 00:22:50.099 Removing: /var/run/dpdk/spdk_pid75046 00:22:50.099 Removing: /var/run/dpdk/spdk_pid75201 00:22:50.099 Removing: /var/run/dpdk/spdk_pid75346 00:22:50.099 Removing: /var/run/dpdk/spdk_pid75776 00:22:50.099 Removing: /var/run/dpdk/spdk_pid76506 00:22:50.099 Removing: /var/run/dpdk/spdk_pid76893 00:22:50.099 Removing: /var/run/dpdk/spdk_pid77576 00:22:50.099 Removing: /var/run/dpdk/spdk_pid78018 00:22:50.099 Removing: /var/run/dpdk/spdk_pid78777 00:22:50.099 Removing: /var/run/dpdk/spdk_pid79188 00:22:50.099 Removing: /var/run/dpdk/spdk_pid81146 00:22:50.099 Removing: /var/run/dpdk/spdk_pid81591 00:22:50.099 Removing: /var/run/dpdk/spdk_pid82038 00:22:50.099 Removing: /var/run/dpdk/spdk_pid84120 00:22:50.099 Removing: /var/run/dpdk/spdk_pid84600 00:22:50.099 Removing: /var/run/dpdk/spdk_pid85123 00:22:50.099 Removing: /var/run/dpdk/spdk_pid86180 00:22:50.099 Removing: /var/run/dpdk/spdk_pid86503 00:22:50.099 Removing: /var/run/dpdk/spdk_pid87429 00:22:50.099 Removing: /var/run/dpdk/spdk_pid87758 00:22:50.099 Removing: /var/run/dpdk/spdk_pid88697 00:22:50.099 Removing: /var/run/dpdk/spdk_pid89021 00:22:50.099 Removing: /var/run/dpdk/spdk_pid89692 00:22:50.099 Removing: /var/run/dpdk/spdk_pid89987 00:22:50.099 Removing: /var/run/dpdk/spdk_pid90054 00:22:50.099 Removing: /var/run/dpdk/spdk_pid90102 00:22:50.099 Removing: /var/run/dpdk/spdk_pid90357 00:22:50.099 Removing: /var/run/dpdk/spdk_pid90534 00:22:50.099 Removing: /var/run/dpdk/spdk_pid90635 
00:22:50.099 Removing: /var/run/dpdk/spdk_pid90733 00:22:50.099 Removing: /var/run/dpdk/spdk_pid90792 00:22:50.099 Removing: /var/run/dpdk/spdk_pid90823 00:22:50.099 Clean 00:22:50.360 15:49:33 -- common/autotest_common.sh@1453 -- # return 0 00:22:50.360 15:49:33 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:22:50.360 15:49:33 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:50.360 15:49:33 -- common/autotest_common.sh@10 -- # set +x 00:22:50.360 15:49:33 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:22:50.360 15:49:33 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:50.360 15:49:33 -- common/autotest_common.sh@10 -- # set +x 00:22:50.360 15:49:33 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:50.360 15:49:33 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:22:50.360 15:49:33 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:22:50.360 15:49:33 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:22:50.360 15:49:33 -- spdk/autotest.sh@398 -- # hostname 00:22:50.360 15:49:33 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:22:50.632 geninfo: WARNING: invalid characters removed from testname! 
00:23:12.620 15:49:55 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:15.907 15:49:58 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:17.808 15:50:00 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:20.342 15:50:03 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:22.251 15:50:05 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:24.156 15:50:07 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:26.066 15:50:09 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:23:26.066 15:50:09 -- spdk/autorun.sh@1 -- $ timing_finish 00:23:26.066 15:50:09 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:23:26.066 15:50:09 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:23:26.066 15:50:09 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:23:26.066 15:50:09 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:26.325 + [[ -n 5216 ]] 00:23:26.325 + sudo kill 5216 00:23:26.336 [Pipeline] } 00:23:26.353 [Pipeline] // timeout 00:23:26.358 [Pipeline] } 00:23:26.373 [Pipeline] // stage 00:23:26.379 [Pipeline] } 00:23:26.393 [Pipeline] // catchError 00:23:26.404 [Pipeline] stage 00:23:26.406 [Pipeline] { (Stop VM) 00:23:26.419 [Pipeline] sh 00:23:26.702 + vagrant halt 00:23:29.237 ==> default: Halting domain... 00:23:35.818 [Pipeline] sh 00:23:36.099 + vagrant destroy -f 00:23:38.683 ==> default: Removing domain... 
00:23:38.952 [Pipeline] sh 00:23:39.233 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:23:39.242 [Pipeline] } 00:23:39.257 [Pipeline] // stage 00:23:39.262 [Pipeline] } 00:23:39.276 [Pipeline] // dir 00:23:39.280 [Pipeline] } 00:23:39.294 [Pipeline] // wrap 00:23:39.300 [Pipeline] } 00:23:39.307 [Pipeline] // catchError 00:23:39.313 [Pipeline] stage 00:23:39.315 [Pipeline] { (Epilogue) 00:23:39.322 [Pipeline] sh 00:23:39.598 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:23:44.884 [Pipeline] catchError 00:23:44.886 [Pipeline] { 00:23:44.900 [Pipeline] sh 00:23:45.185 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:23:45.185 Artifacts sizes are good 00:23:45.194 [Pipeline] } 00:23:45.209 [Pipeline] // catchError 00:23:45.221 [Pipeline] archiveArtifacts 00:23:45.228 Archiving artifacts 00:23:45.326 [Pipeline] cleanWs 00:23:45.337 [WS-CLEANUP] Deleting project workspace... 00:23:45.337 [WS-CLEANUP] Deferred wipeout is used... 00:23:45.344 [WS-CLEANUP] done 00:23:45.346 [Pipeline] } 00:23:45.361 [Pipeline] // stage 00:23:45.366 [Pipeline] } 00:23:45.381 [Pipeline] // node 00:23:45.386 [Pipeline] End of Pipeline 00:23:45.418 Finished: SUCCESS